00:00:00.000 Started by upstream project "autotest-per-patch" build number 132323 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.020 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.021 The recommended git tool is: git 00:00:00.022 using credential 00000000-0000-0000-0000-000000000002 00:00:00.024 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.037 Fetching changes from the remote Git repository 00:00:00.039 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.053 Using shallow fetch with depth 1 00:00:00.053 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.053 > git --version # timeout=10 00:00:00.066 > git --version # 'git version 2.39.2' 00:00:00.066 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.081 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.081 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:02.304 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:02.315 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:02.326 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:02.326 > git config core.sparsecheckout # timeout=10 00:00:02.339 > git read-tree -mu HEAD # timeout=10 00:00:02.355 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:02.375 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:02.375 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:02.586 [Pipeline] Start of Pipeline 00:00:02.602 [Pipeline] library 00:00:02.603 Loading library shm_lib@master 00:00:02.604 Library shm_lib@master is cached. Copying from home. 00:00:02.622 [Pipeline] node 00:00:02.629 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2 00:00:02.634 [Pipeline] { 00:00:02.641 [Pipeline] catchError 00:00:02.642 [Pipeline] { 00:00:02.653 [Pipeline] wrap 00:00:02.662 [Pipeline] { 00:00:02.673 [Pipeline] stage 00:00:02.675 [Pipeline] { (Prologue) 00:00:02.699 [Pipeline] echo 00:00:02.701 Node: VM-host-WFP7 00:00:02.710 [Pipeline] cleanWs 00:00:02.722 [WS-CLEANUP] Deleting project workspace... 00:00:02.722 [WS-CLEANUP] Deferred wipeout is used... 00:00:02.729 [WS-CLEANUP] done 00:00:02.957 [Pipeline] setCustomBuildProperty 00:00:03.046 [Pipeline] httpRequest 00:00:03.586 [Pipeline] echo 00:00:03.588 Sorcerer 10.211.164.20 is alive 00:00:03.596 [Pipeline] retry 00:00:03.599 [Pipeline] { 00:00:03.612 [Pipeline] httpRequest 00:00:03.616 HttpMethod: GET 00:00:03.617 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.617 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.618 Response Code: HTTP/1.1 200 OK 00:00:03.618 Success: Status code 200 is in the accepted range: 200,404 00:00:03.619 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:03.764 [Pipeline] } 00:00:03.781 [Pipeline] // retry 00:00:03.789 [Pipeline] sh 00:00:04.073 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:04.090 [Pipeline] httpRequest 00:00:04.378 [Pipeline] echo 00:00:04.380 Sorcerer 10.211.164.20 is alive 00:00:04.391 [Pipeline] retry 00:00:04.393 [Pipeline] { 00:00:04.407 [Pipeline] httpRequest 00:00:04.411 HttpMethod: GET 00:00:04.412 URL: 
http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:04.412 Sending request to url: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:04.414 Response Code: HTTP/1.1 200 OK 00:00:04.415 Success: Status code 200 is in the accepted range: 200,404 00:00:04.415 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:33.785 [Pipeline] } 00:00:33.805 [Pipeline] // retry 00:00:33.815 [Pipeline] sh 00:00:34.093 + tar --no-same-owner -xf spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:37.483 [Pipeline] sh 00:00:37.761 + git -C spdk log --oneline -n5 00:00:37.761 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:00:37.761 73f18e890 lib/reduce: fix the magic number of empty mapping detection. 00:00:37.761 029355612 bdev_ut: add manual examine bdev unit test case 00:00:37.761 fc96810c2 bdev: remove bdev from examine allow list on unregister 00:00:37.761 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public 00:00:37.777 [Pipeline] writeFile 00:00:37.791 [Pipeline] sh 00:00:38.069 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:38.078 [Pipeline] sh 00:00:38.352 + cat autorun-spdk.conf 00:00:38.352 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.352 SPDK_RUN_ASAN=1 00:00:38.352 SPDK_RUN_UBSAN=1 00:00:38.352 SPDK_TEST_RAID=1 00:00:38.352 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.359 RUN_NIGHTLY=0 00:00:38.361 [Pipeline] } 00:00:38.374 [Pipeline] // stage 00:00:38.388 [Pipeline] stage 00:00:38.390 [Pipeline] { (Run VM) 00:00:38.402 [Pipeline] sh 00:00:38.680 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:38.680 + echo 'Start stage prepare_nvme.sh' 00:00:38.680 Start stage prepare_nvme.sh 00:00:38.680 + [[ -n 6 ]] 00:00:38.680 + disk_prefix=ex6 00:00:38.680 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]] 00:00:38.680 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]] 00:00:38.680 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf 00:00:38.680 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:38.680 ++ SPDK_RUN_ASAN=1 00:00:38.680 ++ SPDK_RUN_UBSAN=1 00:00:38.680 ++ SPDK_TEST_RAID=1 00:00:38.680 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:38.680 ++ RUN_NIGHTLY=0 00:00:38.680 + cd /var/jenkins/workspace/raid-vg-autotest_2 00:00:38.680 + nvme_files=() 00:00:38.680 + declare -A nvme_files 00:00:38.680 + backend_dir=/var/lib/libvirt/images/backends 00:00:38.680 + nvme_files['nvme.img']=5G 00:00:38.680 + nvme_files['nvme-cmb.img']=5G 00:00:38.680 + nvme_files['nvme-multi0.img']=4G 00:00:38.680 + nvme_files['nvme-multi1.img']=4G 00:00:38.680 + nvme_files['nvme-multi2.img']=4G 00:00:38.680 + nvme_files['nvme-openstack.img']=8G 00:00:38.680 + nvme_files['nvme-zns.img']=5G 00:00:38.680 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:38.680 + (( SPDK_TEST_FTL == 1 )) 00:00:38.680 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:38.680 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G 00:00:38.680 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G 00:00:38.680 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G 00:00:38.680 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G 00:00:38.680 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G 00:00:38.680 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G 00:00:38.680 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:38.680 + for nvme in "${!nvme_files[@]}" 00:00:38.680 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G 00:00:39.615 
Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:39.615 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu 00:00:39.615 + echo 'End stage prepare_nvme.sh' 00:00:39.615 End stage prepare_nvme.sh 00:00:39.626 [Pipeline] sh 00:00:39.909 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:39.909 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39 00:00:39.909 00:00:39.909 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant 00:00:39.909 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk 00:00:39.909 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2 00:00:39.909 HELP=0 00:00:39.909 DRY_RUN=0 00:00:39.909 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img, 00:00:39.909 NVME_DISKS_TYPE=nvme,nvme, 00:00:39.909 NVME_AUTO_CREATE=0 00:00:39.909 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img, 00:00:39.909 NVME_CMB=,, 00:00:39.909 NVME_PMR=,, 00:00:39.909 NVME_ZNS=,, 00:00:39.909 NVME_MS=,, 00:00:39.909 NVME_FDP=,, 00:00:39.909 SPDK_VAGRANT_DISTRO=fedora39 00:00:39.909 SPDK_VAGRANT_VMCPU=10 00:00:39.909 SPDK_VAGRANT_VMRAM=12288 00:00:39.909 SPDK_VAGRANT_PROVIDER=libvirt 00:00:39.909 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:39.909 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:39.909 SPDK_OPENSTACK_NETWORK=0 00:00:39.909 VAGRANT_PACKAGE_BOX=0 00:00:39.909 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 
00:00:39.909 FORCE_DISTRO=true 00:00:39.909 VAGRANT_BOX_VERSION= 00:00:39.909 EXTRA_VAGRANTFILES= 00:00:39.909 NIC_MODEL=virtio 00:00:39.909 00:00:39.909 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt' 00:00:39.909 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2 00:00:42.485 Bringing machine 'default' up with 'libvirt' provider... 00:00:42.744 ==> default: Creating image (snapshot of base box volume). 00:00:42.744 ==> default: Creating domain with the following settings... 00:00:42.744 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732011116_00fba18fec326f84a71a 00:00:42.744 ==> default: -- Domain type: kvm 00:00:42.744 ==> default: -- Cpus: 10 00:00:42.744 ==> default: -- Feature: acpi 00:00:42.744 ==> default: -- Feature: apic 00:00:42.744 ==> default: -- Feature: pae 00:00:42.744 ==> default: -- Memory: 12288M 00:00:42.744 ==> default: -- Memory Backing: hugepages: 00:00:42.744 ==> default: -- Management MAC: 00:00:42.744 ==> default: -- Loader: 00:00:42.744 ==> default: -- Nvram: 00:00:42.744 ==> default: -- Base box: spdk/fedora39 00:00:42.744 ==> default: -- Storage pool: default 00:00:42.744 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732011116_00fba18fec326f84a71a.img (20G) 00:00:42.744 ==> default: -- Volume Cache: default 00:00:42.744 ==> default: -- Kernel: 00:00:42.744 ==> default: -- Initrd: 00:00:42.744 ==> default: -- Graphics Type: vnc 00:00:42.744 ==> default: -- Graphics Port: -1 00:00:42.744 ==> default: -- Graphics IP: 127.0.0.1 00:00:42.744 ==> default: -- Graphics Password: Not defined 00:00:42.744 ==> default: -- Video Type: cirrus 00:00:42.744 ==> default: -- Video VRAM: 9216 00:00:42.744 ==> default: -- Sound Type: 00:00:42.744 ==> default: -- Keymap: en-us 00:00:42.744 ==> default: -- TPM Path: 00:00:42.744 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:42.744 ==> default: -- Command line 
args: 00:00:42.744 ==> default: -> value=-device, 00:00:42.744 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:42.744 ==> default: -> value=-drive, 00:00:42.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0, 00:00:42.744 ==> default: -> value=-device, 00:00:42.744 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.744 ==> default: -> value=-device, 00:00:42.744 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:42.744 ==> default: -> value=-drive, 00:00:42.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:42.744 ==> default: -> value=-device, 00:00:42.744 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.744 ==> default: -> value=-drive, 00:00:42.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:42.744 ==> default: -> value=-device, 00:00:42.744 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:42.744 ==> default: -> value=-drive, 00:00:42.744 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:42.744 ==> default: -> value=-device, 00:00:42.744 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:43.002 ==> default: Creating shared folders metadata... 00:00:43.002 ==> default: Starting domain. 00:00:44.378 ==> default: Waiting for domain to get an IP address... 00:01:02.462 ==> default: Waiting for SSH to become available... 00:01:02.462 ==> default: Configuring and enabling network interfaces... 
00:01:06.653 default: SSH address: 192.168.121.127:22 00:01:06.653 default: SSH username: vagrant 00:01:06.653 default: SSH auth method: private key 00:01:09.942 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:18.066 ==> default: Mounting SSHFS shared folder... 00:01:20.608 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:20.608 ==> default: Checking Mount.. 00:01:22.516 ==> default: Folder Successfully Mounted! 00:01:22.516 ==> default: Running provisioner: file... 00:01:23.088 default: ~/.gitconfig => .gitconfig 00:01:23.660 00:01:23.660 SUCCESS! 00:01:23.660 00:01:23.660 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:01:23.660 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:23.660 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 
00:01:23.660 00:01:23.670 [Pipeline] } 00:01:23.686 [Pipeline] // stage 00:01:23.693 [Pipeline] dir 00:01:23.694 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt 00:01:23.696 [Pipeline] { 00:01:23.708 [Pipeline] catchError 00:01:23.710 [Pipeline] { 00:01:23.721 [Pipeline] sh 00:01:24.003 + vagrant ssh-config --host vagrant 00:01:24.003 + sed -ne /^Host/,$p 00:01:24.003 + tee ssh_conf 00:01:26.543 Host vagrant 00:01:26.543 HostName 192.168.121.127 00:01:26.543 User vagrant 00:01:26.543 Port 22 00:01:26.543 UserKnownHostsFile /dev/null 00:01:26.543 StrictHostKeyChecking no 00:01:26.543 PasswordAuthentication no 00:01:26.543 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:26.543 IdentitiesOnly yes 00:01:26.543 LogLevel FATAL 00:01:26.543 ForwardAgent yes 00:01:26.543 ForwardX11 yes 00:01:26.543 00:01:26.556 [Pipeline] withEnv 00:01:26.558 [Pipeline] { 00:01:26.571 [Pipeline] sh 00:01:26.852 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:26.852 source /etc/os-release 00:01:26.852 [[ -e /image.version ]] && img=$(< /image.version) 00:01:26.852 # Minimal, systemd-like check. 00:01:26.852 if [[ -e /.dockerenv ]]; then 00:01:26.852 # Clear garbage from the node's name: 00:01:26.852 # agt-er_autotest_547-896 -> autotest_547-896 00:01:26.852 # $HOSTNAME is the actual container id 00:01:26.852 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:26.852 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:26.852 # We can assume this is a mount from a host where container is running, 00:01:26.852 # so fetch its hostname to easily identify the target swarm worker. 
00:01:26.852 container="$(< /etc/hostname) ($agent)" 00:01:26.852 else 00:01:26.852 # Fallback 00:01:26.852 container=$agent 00:01:26.852 fi 00:01:26.852 fi 00:01:26.852 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:26.852 00:01:27.123 [Pipeline] } 00:01:27.139 [Pipeline] // withEnv 00:01:27.147 [Pipeline] setCustomBuildProperty 00:01:27.160 [Pipeline] stage 00:01:27.162 [Pipeline] { (Tests) 00:01:27.180 [Pipeline] sh 00:01:27.493 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:27.769 [Pipeline] sh 00:01:28.052 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:28.325 [Pipeline] timeout 00:01:28.325 Timeout set to expire in 1 hr 30 min 00:01:28.327 [Pipeline] { 00:01:28.339 [Pipeline] sh 00:01:28.621 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:29.190 HEAD is now at dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:01:29.203 [Pipeline] sh 00:01:29.486 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:29.759 [Pipeline] sh 00:01:30.041 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:30.317 [Pipeline] sh 00:01:30.600 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:30.859 ++ readlink -f spdk_repo 00:01:30.859 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:30.859 + [[ -n /home/vagrant/spdk_repo ]] 00:01:30.859 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:30.859 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:30.859 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:30.859 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:30.859 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:30.859 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:30.859 + cd /home/vagrant/spdk_repo 00:01:30.859 + source /etc/os-release 00:01:30.860 ++ NAME='Fedora Linux' 00:01:30.860 ++ VERSION='39 (Cloud Edition)' 00:01:30.860 ++ ID=fedora 00:01:30.860 ++ VERSION_ID=39 00:01:30.860 ++ VERSION_CODENAME= 00:01:30.860 ++ PLATFORM_ID=platform:f39 00:01:30.860 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:30.860 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:30.860 ++ LOGO=fedora-logo-icon 00:01:30.860 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:30.860 ++ HOME_URL=https://fedoraproject.org/ 00:01:30.860 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:30.860 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:30.860 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:30.860 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:30.860 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:30.860 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:30.860 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:30.860 ++ SUPPORT_END=2024-11-12 00:01:30.860 ++ VARIANT='Cloud Edition' 00:01:30.860 ++ VARIANT_ID=cloud 00:01:30.860 + uname -a 00:01:30.860 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:30.860 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:31.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:31.430 Hugepages 00:01:31.430 node hugesize free / total 00:01:31.430 node0 1048576kB 0 / 0 00:01:31.430 node0 2048kB 0 / 0 00:01:31.430 00:01:31.430 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:31.430 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:31.430 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:31.430 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 
nvme0n1 nvme0n2 nvme0n3 00:01:31.430 + rm -f /tmp/spdk-ld-path 00:01:31.430 + source autorun-spdk.conf 00:01:31.430 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.430 ++ SPDK_RUN_ASAN=1 00:01:31.430 ++ SPDK_RUN_UBSAN=1 00:01:31.430 ++ SPDK_TEST_RAID=1 00:01:31.430 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.430 ++ RUN_NIGHTLY=0 00:01:31.430 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:31.430 + [[ -n '' ]] 00:01:31.430 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:31.430 + for M in /var/spdk/build-*-manifest.txt 00:01:31.430 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:31.430 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.430 + for M in /var/spdk/build-*-manifest.txt 00:01:31.430 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:31.430 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.430 + for M in /var/spdk/build-*-manifest.txt 00:01:31.430 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:31.430 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:31.430 ++ uname 00:01:31.430 + [[ Linux == \L\i\n\u\x ]] 00:01:31.430 + sudo dmesg -T 00:01:31.692 + sudo dmesg --clear 00:01:31.692 + dmesg_pid=5423 00:01:31.692 + [[ Fedora Linux == FreeBSD ]] 00:01:31.692 + sudo dmesg -Tw 00:01:31.692 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.692 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:31.692 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:31.692 + [[ -x /usr/src/fio-static/fio ]] 00:01:31.692 + export FIO_BIN=/usr/src/fio-static/fio 00:01:31.692 + FIO_BIN=/usr/src/fio-static/fio 00:01:31.692 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:31.692 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:31.692 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:31.692 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.692 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:31.692 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:31.692 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.692 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:31.692 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.692 10:12:45 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:31.692 10:12:45 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.692 10:12:45 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.692 10:12:45 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:31.692 10:12:45 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:31.692 10:12:45 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:31.692 10:12:45 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.692 10:12:45 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:01:31.692 10:12:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:31.692 10:12:45 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.692 10:12:45 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:31.692 10:12:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:31.692 10:12:45 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:31.692 10:12:45 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.692 10:12:45 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.692 10:12:45 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.692 10:12:45 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.692 10:12:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.693 10:12:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.693 10:12:45 -- paths/export.sh@5 -- $ export PATH 00:01:31.693 10:12:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.693 10:12:45 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:31.693 10:12:45 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:31.693 10:12:45 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732011165.XXXXXX 00:01:31.693 10:12:45 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732011165.SDsNka 00:01:31.693 10:12:45 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:31.693 10:12:45 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:31.693 10:12:45 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:31.693 10:12:45 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:31.693 10:12:45 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.693 10:12:45 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:31.693 10:12:45 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:31.693 10:12:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.961 10:12:45 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:31.962 10:12:45 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:31.962 10:12:45 -- pm/common@17 -- $ local monitor 00:01:31.962 10:12:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.962 10:12:45 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.962 10:12:45 -- pm/common@25 -- $ sleep 1 00:01:31.962 10:12:45 -- pm/common@21 -- $ date +%s 00:01:31.962 10:12:45 -- pm/common@21 -- $ date +%s 00:01:31.962 
10:12:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732011165 00:01:31.962 10:12:45 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732011165 00:01:31.962 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732011165_collect-cpu-load.pm.log 00:01:31.962 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732011165_collect-vmstat.pm.log 00:01:32.924 10:12:46 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:32.924 10:12:46 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.924 10:12:46 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.924 10:12:46 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:32.924 10:12:46 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.924 Tue Nov 19 10:12:46 AM UTC 2024 00:01:32.924 10:12:46 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.924 v25.01-pre-197-gdcc2ca8f3 00:01:32.924 10:12:46 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:32.924 10:12:46 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:32.924 10:12:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:32.924 10:12:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:32.924 10:12:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.924 ************************************ 00:01:32.924 START TEST asan 00:01:32.924 ************************************ 00:01:32.924 using asan 00:01:32.924 10:12:46 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:32.924 00:01:32.924 real 0m0.000s 00:01:32.924 user 0m0.000s 00:01:32.924 sys 0m0.000s 00:01:32.924 10:12:46 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:32.924 10:12:46 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:32.924 ************************************ 00:01:32.924 END TEST asan 00:01:32.924 ************************************ 00:01:32.924 10:12:46 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:32.924 10:12:46 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:32.924 10:12:46 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:32.924 10:12:46 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:32.924 10:12:46 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.924 ************************************ 00:01:32.924 START TEST ubsan 00:01:32.924 ************************************ 00:01:32.924 using ubsan 00:01:32.924 10:12:46 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:32.924 00:01:32.924 real 0m0.000s 00:01:32.924 user 0m0.000s 00:01:32.924 sys 0m0.000s 00:01:32.924 10:12:46 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:32.924 10:12:46 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.924 ************************************ 00:01:32.924 END TEST ubsan 00:01:32.924 ************************************ 00:01:32.924 10:12:46 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:32.924 10:12:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:32.924 10:12:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:32.924 10:12:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:32.924 10:12:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:32.924 10:12:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:32.924 10:12:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:32.924 10:12:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:32.924 10:12:46 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:33.183 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:33.183 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:33.753 Using 'verbs' RDMA provider 00:01:49.580 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:04.471 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:04.732 Creating mk/config.mk...done. 00:02:04.732 Creating mk/cc.flags.mk...done. 00:02:04.732 Type 'make' to build. 00:02:04.732 10:13:18 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:04.732 10:13:18 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:04.732 10:13:18 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:04.732 10:13:18 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.991 ************************************ 00:02:04.991 START TEST make 00:02:04.991 ************************************ 00:02:04.991 10:13:18 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:05.251 make[1]: Nothing to be done for 'all'. 
00:02:15.259 The Meson build system 00:02:15.259 Version: 1.5.0 00:02:15.259 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:15.259 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:15.259 Build type: native build 00:02:15.259 Program cat found: YES (/usr/bin/cat) 00:02:15.259 Project name: DPDK 00:02:15.259 Project version: 24.03.0 00:02:15.259 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:15.259 C linker for the host machine: cc ld.bfd 2.40-14 00:02:15.259 Host machine cpu family: x86_64 00:02:15.259 Host machine cpu: x86_64 00:02:15.259 Message: ## Building in Developer Mode ## 00:02:15.259 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:15.259 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:15.259 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:15.259 Program python3 found: YES (/usr/bin/python3) 00:02:15.259 Program cat found: YES (/usr/bin/cat) 00:02:15.259 Compiler for C supports arguments -march=native: YES 00:02:15.259 Checking for size of "void *" : 8 00:02:15.259 Checking for size of "void *" : 8 (cached) 00:02:15.259 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:15.259 Library m found: YES 00:02:15.259 Library numa found: YES 00:02:15.259 Has header "numaif.h" : YES 00:02:15.259 Library fdt found: NO 00:02:15.259 Library execinfo found: NO 00:02:15.259 Has header "execinfo.h" : YES 00:02:15.259 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:15.259 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:15.259 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:15.259 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:15.259 Run-time dependency openssl found: YES 3.1.1 00:02:15.259 Run-time dependency libpcap found: YES 1.10.4 00:02:15.259 Has header "pcap.h" with dependency 
libpcap: YES 00:02:15.259 Compiler for C supports arguments -Wcast-qual: YES 00:02:15.259 Compiler for C supports arguments -Wdeprecated: YES 00:02:15.259 Compiler for C supports arguments -Wformat: YES 00:02:15.259 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:15.259 Compiler for C supports arguments -Wformat-security: NO 00:02:15.259 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:15.259 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:15.259 Compiler for C supports arguments -Wnested-externs: YES 00:02:15.259 Compiler for C supports arguments -Wold-style-definition: YES 00:02:15.259 Compiler for C supports arguments -Wpointer-arith: YES 00:02:15.259 Compiler for C supports arguments -Wsign-compare: YES 00:02:15.259 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:15.259 Compiler for C supports arguments -Wundef: YES 00:02:15.259 Compiler for C supports arguments -Wwrite-strings: YES 00:02:15.259 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:15.260 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:15.260 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:15.260 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:15.260 Program objdump found: YES (/usr/bin/objdump) 00:02:15.260 Compiler for C supports arguments -mavx512f: YES 00:02:15.260 Checking if "AVX512 checking" compiles: YES 00:02:15.260 Fetching value of define "__SSE4_2__" : 1 00:02:15.260 Fetching value of define "__AES__" : 1 00:02:15.260 Fetching value of define "__AVX__" : 1 00:02:15.260 Fetching value of define "__AVX2__" : 1 00:02:15.260 Fetching value of define "__AVX512BW__" : 1 00:02:15.260 Fetching value of define "__AVX512CD__" : 1 00:02:15.260 Fetching value of define "__AVX512DQ__" : 1 00:02:15.260 Fetching value of define "__AVX512F__" : 1 00:02:15.260 Fetching value of define "__AVX512VL__" : 1 00:02:15.260 Fetching value of define 
"__PCLMUL__" : 1 00:02:15.260 Fetching value of define "__RDRND__" : 1 00:02:15.260 Fetching value of define "__RDSEED__" : 1 00:02:15.260 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:15.260 Fetching value of define "__znver1__" : (undefined) 00:02:15.260 Fetching value of define "__znver2__" : (undefined) 00:02:15.260 Fetching value of define "__znver3__" : (undefined) 00:02:15.260 Fetching value of define "__znver4__" : (undefined) 00:02:15.260 Library asan found: YES 00:02:15.260 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:15.260 Message: lib/log: Defining dependency "log" 00:02:15.260 Message: lib/kvargs: Defining dependency "kvargs" 00:02:15.260 Message: lib/telemetry: Defining dependency "telemetry" 00:02:15.260 Library rt found: YES 00:02:15.260 Checking for function "getentropy" : NO 00:02:15.260 Message: lib/eal: Defining dependency "eal" 00:02:15.260 Message: lib/ring: Defining dependency "ring" 00:02:15.260 Message: lib/rcu: Defining dependency "rcu" 00:02:15.260 Message: lib/mempool: Defining dependency "mempool" 00:02:15.260 Message: lib/mbuf: Defining dependency "mbuf" 00:02:15.260 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:15.260 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:15.260 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:15.260 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:15.260 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:15.260 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:15.260 Compiler for C supports arguments -mpclmul: YES 00:02:15.260 Compiler for C supports arguments -maes: YES 00:02:15.260 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:15.260 Compiler for C supports arguments -mavx512bw: YES 00:02:15.260 Compiler for C supports arguments -mavx512dq: YES 00:02:15.260 Compiler for C supports arguments -mavx512vl: YES 00:02:15.260 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:15.260 Compiler for C supports arguments -mavx2: YES 00:02:15.260 Compiler for C supports arguments -mavx: YES 00:02:15.260 Message: lib/net: Defining dependency "net" 00:02:15.260 Message: lib/meter: Defining dependency "meter" 00:02:15.260 Message: lib/ethdev: Defining dependency "ethdev" 00:02:15.260 Message: lib/pci: Defining dependency "pci" 00:02:15.260 Message: lib/cmdline: Defining dependency "cmdline" 00:02:15.260 Message: lib/hash: Defining dependency "hash" 00:02:15.260 Message: lib/timer: Defining dependency "timer" 00:02:15.260 Message: lib/compressdev: Defining dependency "compressdev" 00:02:15.260 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:15.260 Message: lib/dmadev: Defining dependency "dmadev" 00:02:15.260 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:15.260 Message: lib/power: Defining dependency "power" 00:02:15.260 Message: lib/reorder: Defining dependency "reorder" 00:02:15.260 Message: lib/security: Defining dependency "security" 00:02:15.260 Has header "linux/userfaultfd.h" : YES 00:02:15.260 Has header "linux/vduse.h" : YES 00:02:15.260 Message: lib/vhost: Defining dependency "vhost" 00:02:15.260 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:15.260 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:15.260 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:15.260 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:15.260 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:15.260 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:15.260 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:15.260 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:15.260 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:15.260 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:15.260 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:15.260 Configuring doxy-api-html.conf using configuration 00:02:15.260 Configuring doxy-api-man.conf using configuration 00:02:15.260 Program mandb found: YES (/usr/bin/mandb) 00:02:15.260 Program sphinx-build found: NO 00:02:15.260 Configuring rte_build_config.h using configuration 00:02:15.260 Message: 00:02:15.260 ================= 00:02:15.260 Applications Enabled 00:02:15.260 ================= 00:02:15.260 00:02:15.260 apps: 00:02:15.260 00:02:15.260 00:02:15.260 Message: 00:02:15.260 ================= 00:02:15.260 Libraries Enabled 00:02:15.260 ================= 00:02:15.260 00:02:15.260 libs: 00:02:15.260 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:15.260 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:15.260 cryptodev, dmadev, power, reorder, security, vhost, 00:02:15.260 00:02:15.260 Message: 00:02:15.260 =============== 00:02:15.260 Drivers Enabled 00:02:15.260 =============== 00:02:15.260 00:02:15.260 common: 00:02:15.260 00:02:15.260 bus: 00:02:15.260 pci, vdev, 00:02:15.260 mempool: 00:02:15.260 ring, 00:02:15.260 dma: 00:02:15.260 00:02:15.260 net: 00:02:15.260 00:02:15.260 crypto: 00:02:15.260 00:02:15.260 compress: 00:02:15.260 00:02:15.260 vdpa: 00:02:15.260 00:02:15.260 00:02:15.260 Message: 00:02:15.260 ================= 00:02:15.260 Content Skipped 00:02:15.260 ================= 00:02:15.260 00:02:15.260 apps: 00:02:15.260 dumpcap: explicitly disabled via build config 00:02:15.260 graph: explicitly disabled via build config 00:02:15.260 pdump: explicitly disabled via build config 00:02:15.260 proc-info: explicitly disabled via build config 00:02:15.260 test-acl: explicitly disabled via build config 00:02:15.260 test-bbdev: explicitly disabled via build config 00:02:15.260 test-cmdline: explicitly disabled via build config 00:02:15.260 test-compress-perf: explicitly disabled via build config 00:02:15.260 test-crypto-perf: explicitly disabled via build 
config 00:02:15.260 test-dma-perf: explicitly disabled via build config 00:02:15.260 test-eventdev: explicitly disabled via build config 00:02:15.260 test-fib: explicitly disabled via build config 00:02:15.260 test-flow-perf: explicitly disabled via build config 00:02:15.260 test-gpudev: explicitly disabled via build config 00:02:15.260 test-mldev: explicitly disabled via build config 00:02:15.260 test-pipeline: explicitly disabled via build config 00:02:15.260 test-pmd: explicitly disabled via build config 00:02:15.260 test-regex: explicitly disabled via build config 00:02:15.260 test-sad: explicitly disabled via build config 00:02:15.260 test-security-perf: explicitly disabled via build config 00:02:15.260 00:02:15.260 libs: 00:02:15.260 argparse: explicitly disabled via build config 00:02:15.260 metrics: explicitly disabled via build config 00:02:15.260 acl: explicitly disabled via build config 00:02:15.260 bbdev: explicitly disabled via build config 00:02:15.260 bitratestats: explicitly disabled via build config 00:02:15.260 bpf: explicitly disabled via build config 00:02:15.260 cfgfile: explicitly disabled via build config 00:02:15.260 distributor: explicitly disabled via build config 00:02:15.260 efd: explicitly disabled via build config 00:02:15.260 eventdev: explicitly disabled via build config 00:02:15.260 dispatcher: explicitly disabled via build config 00:02:15.260 gpudev: explicitly disabled via build config 00:02:15.260 gro: explicitly disabled via build config 00:02:15.260 gso: explicitly disabled via build config 00:02:15.260 ip_frag: explicitly disabled via build config 00:02:15.260 jobstats: explicitly disabled via build config 00:02:15.260 latencystats: explicitly disabled via build config 00:02:15.260 lpm: explicitly disabled via build config 00:02:15.260 member: explicitly disabled via build config 00:02:15.260 pcapng: explicitly disabled via build config 00:02:15.260 rawdev: explicitly disabled via build config 00:02:15.260 regexdev: explicitly 
disabled via build config 00:02:15.260 mldev: explicitly disabled via build config 00:02:15.260 rib: explicitly disabled via build config 00:02:15.260 sched: explicitly disabled via build config 00:02:15.260 stack: explicitly disabled via build config 00:02:15.260 ipsec: explicitly disabled via build config 00:02:15.260 pdcp: explicitly disabled via build config 00:02:15.260 fib: explicitly disabled via build config 00:02:15.260 port: explicitly disabled via build config 00:02:15.260 pdump: explicitly disabled via build config 00:02:15.260 table: explicitly disabled via build config 00:02:15.260 pipeline: explicitly disabled via build config 00:02:15.260 graph: explicitly disabled via build config 00:02:15.260 node: explicitly disabled via build config 00:02:15.261 00:02:15.261 drivers: 00:02:15.261 common/cpt: not in enabled drivers build config 00:02:15.261 common/dpaax: not in enabled drivers build config 00:02:15.261 common/iavf: not in enabled drivers build config 00:02:15.261 common/idpf: not in enabled drivers build config 00:02:15.261 common/ionic: not in enabled drivers build config 00:02:15.261 common/mvep: not in enabled drivers build config 00:02:15.261 common/octeontx: not in enabled drivers build config 00:02:15.261 bus/auxiliary: not in enabled drivers build config 00:02:15.261 bus/cdx: not in enabled drivers build config 00:02:15.261 bus/dpaa: not in enabled drivers build config 00:02:15.261 bus/fslmc: not in enabled drivers build config 00:02:15.261 bus/ifpga: not in enabled drivers build config 00:02:15.261 bus/platform: not in enabled drivers build config 00:02:15.261 bus/uacce: not in enabled drivers build config 00:02:15.261 bus/vmbus: not in enabled drivers build config 00:02:15.261 common/cnxk: not in enabled drivers build config 00:02:15.261 common/mlx5: not in enabled drivers build config 00:02:15.261 common/nfp: not in enabled drivers build config 00:02:15.261 common/nitrox: not in enabled drivers build config 00:02:15.261 common/qat: not 
in enabled drivers build config 00:02:15.261 common/sfc_efx: not in enabled drivers build config 00:02:15.261 mempool/bucket: not in enabled drivers build config 00:02:15.261 mempool/cnxk: not in enabled drivers build config 00:02:15.261 mempool/dpaa: not in enabled drivers build config 00:02:15.261 mempool/dpaa2: not in enabled drivers build config 00:02:15.261 mempool/octeontx: not in enabled drivers build config 00:02:15.261 mempool/stack: not in enabled drivers build config 00:02:15.261 dma/cnxk: not in enabled drivers build config 00:02:15.261 dma/dpaa: not in enabled drivers build config 00:02:15.261 dma/dpaa2: not in enabled drivers build config 00:02:15.261 dma/hisilicon: not in enabled drivers build config 00:02:15.261 dma/idxd: not in enabled drivers build config 00:02:15.261 dma/ioat: not in enabled drivers build config 00:02:15.261 dma/skeleton: not in enabled drivers build config 00:02:15.261 net/af_packet: not in enabled drivers build config 00:02:15.261 net/af_xdp: not in enabled drivers build config 00:02:15.261 net/ark: not in enabled drivers build config 00:02:15.261 net/atlantic: not in enabled drivers build config 00:02:15.261 net/avp: not in enabled drivers build config 00:02:15.261 net/axgbe: not in enabled drivers build config 00:02:15.261 net/bnx2x: not in enabled drivers build config 00:02:15.261 net/bnxt: not in enabled drivers build config 00:02:15.261 net/bonding: not in enabled drivers build config 00:02:15.261 net/cnxk: not in enabled drivers build config 00:02:15.261 net/cpfl: not in enabled drivers build config 00:02:15.261 net/cxgbe: not in enabled drivers build config 00:02:15.261 net/dpaa: not in enabled drivers build config 00:02:15.261 net/dpaa2: not in enabled drivers build config 00:02:15.261 net/e1000: not in enabled drivers build config 00:02:15.261 net/ena: not in enabled drivers build config 00:02:15.261 net/enetc: not in enabled drivers build config 00:02:15.261 net/enetfec: not in enabled drivers build config 
00:02:15.261 net/enic: not in enabled drivers build config 00:02:15.261 net/failsafe: not in enabled drivers build config 00:02:15.261 net/fm10k: not in enabled drivers build config 00:02:15.261 net/gve: not in enabled drivers build config 00:02:15.261 net/hinic: not in enabled drivers build config 00:02:15.261 net/hns3: not in enabled drivers build config 00:02:15.261 net/i40e: not in enabled drivers build config 00:02:15.261 net/iavf: not in enabled drivers build config 00:02:15.261 net/ice: not in enabled drivers build config 00:02:15.261 net/idpf: not in enabled drivers build config 00:02:15.261 net/igc: not in enabled drivers build config 00:02:15.261 net/ionic: not in enabled drivers build config 00:02:15.261 net/ipn3ke: not in enabled drivers build config 00:02:15.261 net/ixgbe: not in enabled drivers build config 00:02:15.261 net/mana: not in enabled drivers build config 00:02:15.261 net/memif: not in enabled drivers build config 00:02:15.261 net/mlx4: not in enabled drivers build config 00:02:15.261 net/mlx5: not in enabled drivers build config 00:02:15.261 net/mvneta: not in enabled drivers build config 00:02:15.261 net/mvpp2: not in enabled drivers build config 00:02:15.261 net/netvsc: not in enabled drivers build config 00:02:15.261 net/nfb: not in enabled drivers build config 00:02:15.261 net/nfp: not in enabled drivers build config 00:02:15.261 net/ngbe: not in enabled drivers build config 00:02:15.261 net/null: not in enabled drivers build config 00:02:15.261 net/octeontx: not in enabled drivers build config 00:02:15.261 net/octeon_ep: not in enabled drivers build config 00:02:15.261 net/pcap: not in enabled drivers build config 00:02:15.261 net/pfe: not in enabled drivers build config 00:02:15.261 net/qede: not in enabled drivers build config 00:02:15.261 net/ring: not in enabled drivers build config 00:02:15.261 net/sfc: not in enabled drivers build config 00:02:15.261 net/softnic: not in enabled drivers build config 00:02:15.261 net/tap: not in 
enabled drivers build config 00:02:15.261 net/thunderx: not in enabled drivers build config 00:02:15.261 net/txgbe: not in enabled drivers build config 00:02:15.261 net/vdev_netvsc: not in enabled drivers build config 00:02:15.261 net/vhost: not in enabled drivers build config 00:02:15.261 net/virtio: not in enabled drivers build config 00:02:15.261 net/vmxnet3: not in enabled drivers build config 00:02:15.261 raw/*: missing internal dependency, "rawdev" 00:02:15.261 crypto/armv8: not in enabled drivers build config 00:02:15.261 crypto/bcmfs: not in enabled drivers build config 00:02:15.261 crypto/caam_jr: not in enabled drivers build config 00:02:15.261 crypto/ccp: not in enabled drivers build config 00:02:15.261 crypto/cnxk: not in enabled drivers build config 00:02:15.261 crypto/dpaa_sec: not in enabled drivers build config 00:02:15.261 crypto/dpaa2_sec: not in enabled drivers build config 00:02:15.261 crypto/ipsec_mb: not in enabled drivers build config 00:02:15.261 crypto/mlx5: not in enabled drivers build config 00:02:15.261 crypto/mvsam: not in enabled drivers build config 00:02:15.261 crypto/nitrox: not in enabled drivers build config 00:02:15.261 crypto/null: not in enabled drivers build config 00:02:15.261 crypto/octeontx: not in enabled drivers build config 00:02:15.261 crypto/openssl: not in enabled drivers build config 00:02:15.261 crypto/scheduler: not in enabled drivers build config 00:02:15.261 crypto/uadk: not in enabled drivers build config 00:02:15.261 crypto/virtio: not in enabled drivers build config 00:02:15.261 compress/isal: not in enabled drivers build config 00:02:15.261 compress/mlx5: not in enabled drivers build config 00:02:15.261 compress/nitrox: not in enabled drivers build config 00:02:15.261 compress/octeontx: not in enabled drivers build config 00:02:15.261 compress/zlib: not in enabled drivers build config 00:02:15.261 regex/*: missing internal dependency, "regexdev" 00:02:15.261 ml/*: missing internal dependency, "mldev" 
00:02:15.261 vdpa/ifc: not in enabled drivers build config 00:02:15.261 vdpa/mlx5: not in enabled drivers build config 00:02:15.261 vdpa/nfp: not in enabled drivers build config 00:02:15.261 vdpa/sfc: not in enabled drivers build config 00:02:15.261 event/*: missing internal dependency, "eventdev" 00:02:15.261 baseband/*: missing internal dependency, "bbdev" 00:02:15.261 gpu/*: missing internal dependency, "gpudev" 00:02:15.261 00:02:15.261 00:02:15.526 Build targets in project: 85 00:02:15.526 00:02:15.526 DPDK 24.03.0 00:02:15.526 00:02:15.526 User defined options 00:02:15.526 buildtype : debug 00:02:15.526 default_library : shared 00:02:15.526 libdir : lib 00:02:15.526 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:15.526 b_sanitize : address 00:02:15.526 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:15.526 c_link_args : 00:02:15.526 cpu_instruction_set: native 00:02:15.526 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:15.526 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:15.526 enable_docs : false 00:02:15.526 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:15.526 enable_kmods : false 00:02:15.526 max_lcores : 128 00:02:15.526 tests : false 00:02:15.526 00:02:15.526 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.096 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:16.096 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:16.096 [2/268] Compiling C object 
lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:16.096 [3/268] Linking static target lib/librte_kvargs.a 00:02:16.096 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:16.096 [5/268] Linking static target lib/librte_log.a 00:02:16.096 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.355 [7/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.355 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.616 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.616 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.616 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.616 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.616 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.616 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:16.616 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.616 [16/268] Linking static target lib/librte_telemetry.a 00:02:16.875 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.875 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.875 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:17.134 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:17.134 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.134 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:17.134 [23/268] Linking target lib/librte_log.so.24.1 00:02:17.134 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:17.134 [25/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:17.134 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:17.394 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:17.394 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:17.394 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:17.394 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:17.394 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.394 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:17.394 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:17.394 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:17.653 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:17.653 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:17.653 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:17.653 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:17.653 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:17.653 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:17.653 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:17.912 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:17.912 [43/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:17.912 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:17.912 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:17.912 [46/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:18.171 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:18.171 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:18.171 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:18.432 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:18.432 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:18.432 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:18.432 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:18.432 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:18.432 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:18.432 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:18.692 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:18.692 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:18.692 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:18.692 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:18.957 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:18.957 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:18.957 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:18.957 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:18.957 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:18.957 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:18.957 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:19.224 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 
00:02:19.224 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:19.483 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:19.483 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:19.483 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.483 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:19.483 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:19.483 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.483 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:19.483 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.744 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:19.744 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:19.744 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:19.744 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:19.744 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:20.005 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.005 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:20.005 [85/268] Linking static target lib/librte_ring.a 00:02:20.005 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:20.005 [87/268] Linking static target lib/librte_eal.a 00:02:20.264 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.264 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.524 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.524 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.524 [92/268] Compiling C object 
lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.524 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.524 [94/268] Linking static target lib/librte_mempool.a 00:02:20.524 [95/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.524 [96/268] Linking static target lib/librte_rcu.a 00:02:20.524 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.784 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:20.784 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.045 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.045 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:21.045 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.045 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:21.045 [104/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.045 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:21.305 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:21.305 [107/268] Linking static target lib/librte_mbuf.a 00:02:21.305 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:21.305 [109/268] Linking static target lib/librte_net.a 00:02:21.305 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:21.305 [111/268] Linking static target lib/librte_meter.a 00:02:21.305 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:21.305 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:21.565 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:21.565 [115/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.565 [116/268] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.565 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.824 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:21.824 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:21.824 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:22.083 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.083 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:22.342 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:22.342 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:22.609 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:22.609 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:22.609 [127/268] Linking static target lib/librte_pci.a 00:02:22.609 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:22.609 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:22.609 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:22.609 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:22.609 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:22.868 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:22.868 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:22.868 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:22.868 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:22.868 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:22.868 [138/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:22.868 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:22.868 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:22.868 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:22.868 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:22.868 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:22.868 [144/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.126 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:23.126 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:23.126 [147/268] Linking static target lib/librte_cmdline.a 00:02:23.385 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:23.385 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:23.385 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:23.385 [151/268] Linking static target lib/librte_timer.a 00:02:23.385 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:23.643 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:23.643 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:23.902 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:23.902 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.902 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.902 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.902 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.902 [160/268] 
Linking static target lib/librte_compressdev.a 00:02:24.161 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.161 [162/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:24.420 [163/268] Linking static target lib/librte_ethdev.a 00:02:24.420 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.420 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.420 [166/268] Linking static target lib/librte_hash.a 00:02:24.420 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.420 [168/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.421 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.421 [170/268] Linking static target lib/librte_dmadev.a 00:02:24.681 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.681 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.681 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.941 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.941 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.941 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.941 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.199 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.199 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.199 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:25.199 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.458 [182/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.458 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.458 [184/268] Linking static target lib/librte_cryptodev.a 00:02:25.458 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.458 [186/268] Linking static target lib/librte_power.a 00:02:25.716 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:25.716 [188/268] Linking static target lib/librte_reorder.a 00:02:25.716 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:25.716 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:25.716 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:25.974 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:25.974 [193/268] Linking static target lib/librte_security.a 00:02:26.233 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.233 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:26.493 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.750 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:26.750 [198/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.750 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:26.750 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.750 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:27.084 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:27.084 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:27.343 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 
00:02:27.343 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:27.343 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:27.343 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:27.343 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:27.604 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:27.604 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:27.604 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.604 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:27.604 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.604 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:27.604 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:27.864 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:27.864 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.864 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:27.864 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:27.864 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:27.864 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:27.864 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.124 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:28.124 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.124 [225/268] Compiling C object 
drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:28.124 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:28.124 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.057 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:30.438 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.438 [230/268] Linking target lib/librte_eal.so.24.1 00:02:30.438 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.438 [232/268] Linking target lib/librte_pci.so.24.1 00:02:30.438 [233/268] Linking target lib/librte_meter.so.24.1 00:02:30.438 [234/268] Linking target lib/librte_ring.so.24.1 00:02:30.438 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:30.438 [236/268] Linking target lib/librte_timer.so.24.1 00:02:30.438 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.698 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:30.698 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:30.698 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:30.698 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:30.698 [242/268] Linking target lib/librte_rcu.so.24.1 00:02:30.698 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:30.698 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:30.698 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:30.698 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:30.698 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:30.698 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 
00:02:30.958 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:30.958 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:30.958 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:30.958 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:30.958 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:30.958 [254/268] Linking target lib/librte_net.so.24.1 00:02:31.218 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.218 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.218 [257/268] Linking target lib/librte_security.so.24.1 00:02:31.218 [258/268] Linking target lib/librte_hash.so.24.1 00:02:31.218 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:31.477 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:32.857 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.857 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:32.857 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:32.857 [264/268] Linking target lib/librte_power.so.24.1 00:02:33.117 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.117 [266/268] Linking static target lib/librte_vhost.a 00:02:35.654 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.654 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.654 INFO: autodetecting backend as ninja 00:02:35.654 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:53.758 CC lib/ut/ut.o 00:02:53.758 CC lib/log/log_flags.o 00:02:53.758 CC lib/log/log.o 00:02:53.758 CC lib/log/log_deprecated.o 00:02:53.758 CC lib/ut_mock/mock.o 00:02:53.758 LIB libspdk_ut.a 00:02:53.758 LIB libspdk_log.a 
00:02:53.758 SO libspdk_ut.so.2.0 00:02:53.758 LIB libspdk_ut_mock.a 00:02:53.758 SO libspdk_log.so.7.1 00:02:53.758 SO libspdk_ut_mock.so.6.0 00:02:53.758 SYMLINK libspdk_ut.so 00:02:53.758 SYMLINK libspdk_ut_mock.so 00:02:53.758 SYMLINK libspdk_log.so 00:02:53.758 CC lib/dma/dma.o 00:02:53.758 CC lib/util/bit_array.o 00:02:53.758 CC lib/util/cpuset.o 00:02:53.758 CC lib/util/base64.o 00:02:53.758 CC lib/util/crc32c.o 00:02:53.758 CC lib/util/crc32.o 00:02:53.758 CC lib/util/crc16.o 00:02:53.758 CC lib/ioat/ioat.o 00:02:53.758 CXX lib/trace_parser/trace.o 00:02:53.758 CC lib/vfio_user/host/vfio_user_pci.o 00:02:53.758 CC lib/vfio_user/host/vfio_user.o 00:02:53.758 CC lib/util/crc32_ieee.o 00:02:53.758 CC lib/util/crc64.o 00:02:53.758 CC lib/util/dif.o 00:02:53.758 LIB libspdk_dma.a 00:02:53.758 CC lib/util/fd.o 00:02:53.758 CC lib/util/fd_group.o 00:02:53.758 SO libspdk_dma.so.5.0 00:02:53.758 CC lib/util/file.o 00:02:53.758 CC lib/util/hexlify.o 00:02:53.758 SYMLINK libspdk_dma.so 00:02:53.758 CC lib/util/iov.o 00:02:53.758 LIB libspdk_ioat.a 00:02:53.758 SO libspdk_ioat.so.7.0 00:02:53.758 CC lib/util/math.o 00:02:53.758 CC lib/util/net.o 00:02:53.758 LIB libspdk_vfio_user.a 00:02:53.758 SYMLINK libspdk_ioat.so 00:02:53.758 CC lib/util/pipe.o 00:02:53.758 SO libspdk_vfio_user.so.5.0 00:02:53.758 CC lib/util/strerror_tls.o 00:02:53.758 CC lib/util/string.o 00:02:53.758 SYMLINK libspdk_vfio_user.so 00:02:53.758 CC lib/util/uuid.o 00:02:53.758 CC lib/util/xor.o 00:02:53.758 CC lib/util/zipf.o 00:02:53.758 CC lib/util/md5.o 00:02:53.758 LIB libspdk_util.a 00:02:54.017 SO libspdk_util.so.10.1 00:02:54.017 SYMLINK libspdk_util.so 00:02:54.017 LIB libspdk_trace_parser.a 00:02:54.276 SO libspdk_trace_parser.so.6.0 00:02:54.276 CC lib/vmd/vmd.o 00:02:54.276 CC lib/vmd/led.o 00:02:54.276 SYMLINK libspdk_trace_parser.so 00:02:54.276 CC lib/conf/conf.o 00:02:54.276 CC lib/env_dpdk/env.o 00:02:54.276 CC lib/env_dpdk/pci.o 00:02:54.276 CC lib/env_dpdk/memory.o 00:02:54.276 CC 
lib/env_dpdk/init.o 00:02:54.277 CC lib/idxd/idxd.o 00:02:54.277 CC lib/json/json_parse.o 00:02:54.277 CC lib/rdma_utils/rdma_utils.o 00:02:54.537 CC lib/json/json_util.o 00:02:54.537 LIB libspdk_conf.a 00:02:54.537 SO libspdk_conf.so.6.0 00:02:54.537 CC lib/json/json_write.o 00:02:54.537 LIB libspdk_rdma_utils.a 00:02:54.537 SYMLINK libspdk_conf.so 00:02:54.537 CC lib/env_dpdk/threads.o 00:02:54.537 SO libspdk_rdma_utils.so.1.0 00:02:54.537 SYMLINK libspdk_rdma_utils.so 00:02:54.537 CC lib/env_dpdk/pci_ioat.o 00:02:54.537 CC lib/env_dpdk/pci_virtio.o 00:02:54.537 CC lib/env_dpdk/pci_vmd.o 00:02:54.797 CC lib/env_dpdk/pci_idxd.o 00:02:54.797 CC lib/env_dpdk/pci_event.o 00:02:54.797 CC lib/env_dpdk/sigbus_handler.o 00:02:54.797 CC lib/env_dpdk/pci_dpdk.o 00:02:54.797 LIB libspdk_json.a 00:02:54.797 CC lib/rdma_provider/common.o 00:02:54.797 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:54.797 SO libspdk_json.so.6.0 00:02:54.797 SYMLINK libspdk_json.so 00:02:54.797 CC lib/idxd/idxd_user.o 00:02:54.797 CC lib/idxd/idxd_kernel.o 00:02:54.797 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:55.056 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:55.056 LIB libspdk_rdma_provider.a 00:02:55.056 LIB libspdk_vmd.a 00:02:55.056 SO libspdk_rdma_provider.so.7.0 00:02:55.056 SO libspdk_vmd.so.6.0 00:02:55.056 SYMLINK libspdk_rdma_provider.so 00:02:55.056 CC lib/jsonrpc/jsonrpc_server.o 00:02:55.056 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:55.056 CC lib/jsonrpc/jsonrpc_client.o 00:02:55.056 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:55.056 LIB libspdk_idxd.a 00:02:55.056 SYMLINK libspdk_vmd.so 00:02:55.315 SO libspdk_idxd.so.12.1 00:02:55.315 SYMLINK libspdk_idxd.so 00:02:55.315 LIB libspdk_jsonrpc.a 00:02:55.576 SO libspdk_jsonrpc.so.6.0 00:02:55.576 SYMLINK libspdk_jsonrpc.so 00:02:55.837 CC lib/rpc/rpc.o 00:02:56.119 LIB libspdk_env_dpdk.a 00:02:56.119 SO libspdk_env_dpdk.so.15.1 00:02:56.119 LIB libspdk_rpc.a 00:02:56.119 SO libspdk_rpc.so.6.0 00:02:56.119 SYMLINK libspdk_env_dpdk.so 
00:02:56.378 SYMLINK libspdk_rpc.so 00:02:56.637 CC lib/keyring/keyring.o 00:02:56.637 CC lib/keyring/keyring_rpc.o 00:02:56.637 CC lib/trace/trace_flags.o 00:02:56.637 CC lib/notify/notify.o 00:02:56.637 CC lib/notify/notify_rpc.o 00:02:56.637 CC lib/trace/trace.o 00:02:56.637 CC lib/trace/trace_rpc.o 00:02:56.897 LIB libspdk_notify.a 00:02:56.897 LIB libspdk_keyring.a 00:02:56.897 SO libspdk_notify.so.6.0 00:02:56.897 SO libspdk_keyring.so.2.0 00:02:56.897 LIB libspdk_trace.a 00:02:56.897 SYMLINK libspdk_notify.so 00:02:56.897 SYMLINK libspdk_keyring.so 00:02:56.897 SO libspdk_trace.so.11.0 00:02:56.897 SYMLINK libspdk_trace.so 00:02:57.466 CC lib/sock/sock.o 00:02:57.466 CC lib/sock/sock_rpc.o 00:02:57.466 CC lib/thread/thread.o 00:02:57.466 CC lib/thread/iobuf.o 00:02:57.725 LIB libspdk_sock.a 00:02:57.725 SO libspdk_sock.so.10.0 00:02:57.985 SYMLINK libspdk_sock.so 00:02:58.244 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:58.244 CC lib/nvme/nvme_ctrlr.o 00:02:58.244 CC lib/nvme/nvme_fabric.o 00:02:58.244 CC lib/nvme/nvme_ns.o 00:02:58.244 CC lib/nvme/nvme_ns_cmd.o 00:02:58.244 CC lib/nvme/nvme_pcie_common.o 00:02:58.244 CC lib/nvme/nvme_pcie.o 00:02:58.244 CC lib/nvme/nvme.o 00:02:58.244 CC lib/nvme/nvme_qpair.o 00:02:59.181 CC lib/nvme/nvme_quirks.o 00:02:59.181 CC lib/nvme/nvme_transport.o 00:02:59.181 LIB libspdk_thread.a 00:02:59.181 CC lib/nvme/nvme_discovery.o 00:02:59.181 SO libspdk_thread.so.11.0 00:02:59.181 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:59.181 SYMLINK libspdk_thread.so 00:02:59.181 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:59.181 CC lib/nvme/nvme_tcp.o 00:02:59.181 CC lib/nvme/nvme_opal.o 00:02:59.181 CC lib/nvme/nvme_io_msg.o 00:02:59.439 CC lib/nvme/nvme_poll_group.o 00:02:59.439 CC lib/nvme/nvme_zns.o 00:02:59.698 CC lib/nvme/nvme_stubs.o 00:02:59.698 CC lib/nvme/nvme_auth.o 00:02:59.698 CC lib/nvme/nvme_cuse.o 00:02:59.698 CC lib/accel/accel.o 00:02:59.957 CC lib/accel/accel_rpc.o 00:02:59.957 CC lib/nvme/nvme_rdma.o 00:02:59.957 CC 
lib/accel/accel_sw.o 00:03:00.221 CC lib/blob/blobstore.o 00:03:00.221 CC lib/init/json_config.o 00:03:00.221 CC lib/virtio/virtio.o 00:03:00.493 CC lib/virtio/virtio_vhost_user.o 00:03:00.493 CC lib/init/subsystem.o 00:03:00.761 CC lib/init/subsystem_rpc.o 00:03:00.761 CC lib/blob/request.o 00:03:00.761 CC lib/blob/zeroes.o 00:03:00.761 CC lib/blob/blob_bs_dev.o 00:03:00.761 CC lib/virtio/virtio_vfio_user.o 00:03:00.761 CC lib/init/rpc.o 00:03:01.021 CC lib/virtio/virtio_pci.o 00:03:01.021 LIB libspdk_init.a 00:03:01.021 SO libspdk_init.so.6.0 00:03:01.021 CC lib/fsdev/fsdev.o 00:03:01.021 CC lib/fsdev/fsdev_rpc.o 00:03:01.021 CC lib/fsdev/fsdev_io.o 00:03:01.021 LIB libspdk_accel.a 00:03:01.280 SO libspdk_accel.so.16.0 00:03:01.280 SYMLINK libspdk_init.so 00:03:01.280 SYMLINK libspdk_accel.so 00:03:01.280 LIB libspdk_virtio.a 00:03:01.280 SO libspdk_virtio.so.7.0 00:03:01.539 SYMLINK libspdk_virtio.so 00:03:01.539 LIB libspdk_nvme.a 00:03:01.539 CC lib/event/app.o 00:03:01.539 CC lib/event/reactor.o 00:03:01.539 CC lib/event/scheduler_static.o 00:03:01.539 CC lib/event/log_rpc.o 00:03:01.539 CC lib/event/app_rpc.o 00:03:01.539 CC lib/bdev/bdev.o 00:03:01.539 CC lib/bdev/bdev_rpc.o 00:03:01.539 CC lib/bdev/bdev_zone.o 00:03:01.539 CC lib/bdev/part.o 00:03:01.539 SO libspdk_nvme.so.15.0 00:03:01.798 CC lib/bdev/scsi_nvme.o 00:03:01.798 LIB libspdk_fsdev.a 00:03:01.798 SO libspdk_fsdev.so.2.0 00:03:01.798 SYMLINK libspdk_nvme.so 00:03:02.057 SYMLINK libspdk_fsdev.so 00:03:02.057 LIB libspdk_event.a 00:03:02.057 SO libspdk_event.so.14.0 00:03:02.057 SYMLINK libspdk_event.so 00:03:02.316 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:02.884 LIB libspdk_fuse_dispatcher.a 00:03:02.884 SO libspdk_fuse_dispatcher.so.1.0 00:03:03.144 SYMLINK libspdk_fuse_dispatcher.so 00:03:03.712 LIB libspdk_blob.a 00:03:03.971 SO libspdk_blob.so.11.0 00:03:03.971 SYMLINK libspdk_blob.so 00:03:04.231 LIB libspdk_bdev.a 00:03:04.490 CC lib/blobfs/tree.o 00:03:04.490 CC lib/blobfs/blobfs.o 
00:03:04.490 CC lib/lvol/lvol.o 00:03:04.490 SO libspdk_bdev.so.17.0 00:03:04.490 SYMLINK libspdk_bdev.so 00:03:04.748 CC lib/nvmf/ctrlr_bdev.o 00:03:04.748 CC lib/nvmf/ctrlr.o 00:03:04.748 CC lib/nvmf/subsystem.o 00:03:04.748 CC lib/nvmf/ctrlr_discovery.o 00:03:04.748 CC lib/scsi/dev.o 00:03:04.748 CC lib/ublk/ublk.o 00:03:04.748 CC lib/ftl/ftl_core.o 00:03:04.748 CC lib/nbd/nbd.o 00:03:05.008 CC lib/scsi/lun.o 00:03:05.267 CC lib/ftl/ftl_init.o 00:03:05.267 CC lib/nbd/nbd_rpc.o 00:03:05.267 CC lib/scsi/port.o 00:03:05.267 CC lib/ublk/ublk_rpc.o 00:03:05.267 LIB libspdk_blobfs.a 00:03:05.267 SO libspdk_blobfs.so.10.0 00:03:05.267 LIB libspdk_nbd.a 00:03:05.267 CC lib/ftl/ftl_layout.o 00:03:05.539 CC lib/scsi/scsi.o 00:03:05.540 SO libspdk_nbd.so.7.0 00:03:05.540 SYMLINK libspdk_blobfs.so 00:03:05.540 CC lib/nvmf/nvmf.o 00:03:05.540 CC lib/nvmf/nvmf_rpc.o 00:03:05.540 LIB libspdk_ublk.a 00:03:05.540 LIB libspdk_lvol.a 00:03:05.540 SYMLINK libspdk_nbd.so 00:03:05.540 CC lib/nvmf/transport.o 00:03:05.540 SO libspdk_lvol.so.10.0 00:03:05.540 SO libspdk_ublk.so.3.0 00:03:05.540 CC lib/nvmf/tcp.o 00:03:05.540 CC lib/scsi/scsi_bdev.o 00:03:05.540 SYMLINK libspdk_lvol.so 00:03:05.540 SYMLINK libspdk_ublk.so 00:03:05.540 CC lib/nvmf/stubs.o 00:03:05.540 CC lib/nvmf/mdns_server.o 00:03:05.808 CC lib/ftl/ftl_debug.o 00:03:06.067 CC lib/ftl/ftl_io.o 00:03:06.067 CC lib/nvmf/rdma.o 00:03:06.067 CC lib/nvmf/auth.o 00:03:06.067 CC lib/scsi/scsi_pr.o 00:03:06.067 CC lib/ftl/ftl_sb.o 00:03:06.326 CC lib/ftl/ftl_l2p.o 00:03:06.326 CC lib/ftl/ftl_l2p_flat.o 00:03:06.326 CC lib/ftl/ftl_nv_cache.o 00:03:06.326 CC lib/ftl/ftl_band.o 00:03:06.326 CC lib/scsi/scsi_rpc.o 00:03:06.586 CC lib/scsi/task.o 00:03:06.586 CC lib/ftl/ftl_band_ops.o 00:03:06.586 CC lib/ftl/ftl_writer.o 00:03:06.586 CC lib/ftl/ftl_rq.o 00:03:06.586 LIB libspdk_scsi.a 00:03:06.845 SO libspdk_scsi.so.9.0 00:03:06.845 CC lib/ftl/ftl_reloc.o 00:03:06.845 CC lib/ftl/ftl_l2p_cache.o 00:03:06.845 CC lib/ftl/ftl_p2l.o 
00:03:06.845 CC lib/ftl/ftl_p2l_log.o 00:03:06.845 CC lib/ftl/mngt/ftl_mngt.o 00:03:06.845 SYMLINK libspdk_scsi.so 00:03:06.845 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.104 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.104 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:07.104 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:07.104 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:07.364 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:07.364 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:07.364 CC lib/vhost/vhost.o 00:03:07.364 CC lib/iscsi/conn.o 00:03:07.364 CC lib/iscsi/init_grp.o 00:03:07.364 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:07.364 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:07.364 CC lib/iscsi/iscsi.o 00:03:07.364 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:07.364 CC lib/iscsi/param.o 00:03:07.623 CC lib/iscsi/portal_grp.o 00:03:07.623 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:07.623 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:07.623 CC lib/iscsi/tgt_node.o 00:03:07.883 CC lib/ftl/utils/ftl_conf.o 00:03:07.883 CC lib/iscsi/iscsi_subsystem.o 00:03:07.883 CC lib/iscsi/iscsi_rpc.o 00:03:07.883 CC lib/vhost/vhost_rpc.o 00:03:07.883 CC lib/ftl/utils/ftl_md.o 00:03:08.142 CC lib/iscsi/task.o 00:03:08.142 CC lib/ftl/utils/ftl_mempool.o 00:03:08.142 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.142 CC lib/vhost/vhost_scsi.o 00:03:08.142 CC lib/vhost/vhost_blk.o 00:03:08.407 CC lib/vhost/rte_vhost_user.o 00:03:08.407 CC lib/ftl/utils/ftl_property.o 00:03:08.407 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.407 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.407 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.407 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.407 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:08.672 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:08.672 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:08.672 LIB libspdk_nvmf.a 00:03:08.672 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:08.672 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:08.672 SO libspdk_nvmf.so.20.0 00:03:08.672 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:08.672 CC 
lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:08.672 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:08.931 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:08.931 CC lib/ftl/base/ftl_base_dev.o 00:03:08.931 SYMLINK libspdk_nvmf.so 00:03:08.931 CC lib/ftl/base/ftl_base_bdev.o 00:03:08.931 CC lib/ftl/ftl_trace.o 00:03:08.931 LIB libspdk_iscsi.a 00:03:09.190 SO libspdk_iscsi.so.8.0 00:03:09.190 LIB libspdk_ftl.a 00:03:09.190 SYMLINK libspdk_iscsi.so 00:03:09.449 LIB libspdk_vhost.a 00:03:09.449 SO libspdk_ftl.so.9.0 00:03:09.449 SO libspdk_vhost.so.8.0 00:03:09.708 SYMLINK libspdk_vhost.so 00:03:09.708 SYMLINK libspdk_ftl.so 00:03:09.967 CC module/env_dpdk/env_dpdk_rpc.o 00:03:09.967 CC module/blob/bdev/blob_bdev.o 00:03:09.967 CC module/keyring/linux/keyring.o 00:03:09.967 CC module/accel/ioat/accel_ioat.o 00:03:09.967 CC module/accel/dsa/accel_dsa.o 00:03:10.227 CC module/fsdev/aio/fsdev_aio.o 00:03:10.227 CC module/sock/posix/posix.o 00:03:10.227 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.227 CC module/accel/error/accel_error.o 00:03:10.227 CC module/keyring/file/keyring.o 00:03:10.227 LIB libspdk_env_dpdk_rpc.a 00:03:10.227 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.227 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.227 CC module/accel/error/accel_error_rpc.o 00:03:10.227 CC module/keyring/linux/keyring_rpc.o 00:03:10.227 CC module/keyring/file/keyring_rpc.o 00:03:10.227 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.227 LIB libspdk_scheduler_dynamic.a 00:03:10.227 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.227 LIB libspdk_keyring_linux.a 00:03:10.485 LIB libspdk_accel_error.a 00:03:10.485 LIB libspdk_blob_bdev.a 00:03:10.485 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.485 SO libspdk_keyring_linux.so.1.0 00:03:10.485 LIB libspdk_keyring_file.a 00:03:10.485 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.485 SO libspdk_accel_error.so.2.0 00:03:10.485 SO libspdk_blob_bdev.so.11.0 00:03:10.485 SO libspdk_keyring_file.so.2.0 00:03:10.485 LIB libspdk_accel_ioat.a 00:03:10.485 SO 
libspdk_accel_ioat.so.6.0 00:03:10.485 SYMLINK libspdk_keyring_linux.so 00:03:10.485 SYMLINK libspdk_blob_bdev.so 00:03:10.485 SYMLINK libspdk_accel_error.so 00:03:10.485 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.485 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.485 CC module/fsdev/aio/linux_aio_mgr.o 00:03:10.485 SYMLINK libspdk_keyring_file.so 00:03:10.485 SYMLINK libspdk_accel_ioat.so 00:03:10.485 LIB libspdk_accel_dsa.a 00:03:10.485 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.485 SO libspdk_accel_dsa.so.5.0 00:03:10.744 CC module/accel/iaa/accel_iaa.o 00:03:10.744 SYMLINK libspdk_accel_dsa.so 00:03:10.744 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.744 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.744 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.744 CC module/bdev/delay/vbdev_delay.o 00:03:10.744 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.744 LIB libspdk_scheduler_gscheduler.a 00:03:10.744 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:10.744 SO libspdk_scheduler_gscheduler.so.4.0 00:03:10.744 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.744 CC module/bdev/error/vbdev_error.o 00:03:10.744 CC module/bdev/error/vbdev_error_rpc.o 00:03:10.744 CC module/bdev/gpt/gpt.o 00:03:10.744 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.744 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.003 LIB libspdk_fsdev_aio.a 00:03:11.003 LIB libspdk_accel_iaa.a 00:03:11.003 SO libspdk_accel_iaa.so.3.0 00:03:11.003 SO libspdk_fsdev_aio.so.1.0 00:03:11.003 LIB libspdk_sock_posix.a 00:03:11.003 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.003 SYMLINK libspdk_accel_iaa.so 00:03:11.003 SO libspdk_sock_posix.so.6.0 00:03:11.003 SYMLINK libspdk_fsdev_aio.so 00:03:11.003 SYMLINK libspdk_sock_posix.so 00:03:11.003 LIB libspdk_bdev_error.a 00:03:11.003 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.003 LIB libspdk_blobfs_bdev.a 00:03:11.003 SO libspdk_bdev_error.so.6.0 00:03:11.262 CC module/bdev/malloc/bdev_malloc.o 00:03:11.262 CC module/bdev/null/bdev_null.o 
00:03:11.262 CC module/bdev/nvme/bdev_nvme.o 00:03:11.262 LIB libspdk_bdev_gpt.a 00:03:11.262 SO libspdk_blobfs_bdev.so.6.0 00:03:11.262 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.262 SO libspdk_bdev_gpt.so.6.0 00:03:11.262 SYMLINK libspdk_bdev_error.so 00:03:11.262 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:11.262 CC module/bdev/raid/bdev_raid.o 00:03:11.262 LIB libspdk_bdev_delay.a 00:03:11.262 SYMLINK libspdk_blobfs_bdev.so 00:03:11.262 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.262 SYMLINK libspdk_bdev_gpt.so 00:03:11.262 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.262 SO libspdk_bdev_delay.so.6.0 00:03:11.262 SYMLINK libspdk_bdev_delay.so 00:03:11.262 CC module/bdev/null/bdev_null_rpc.o 00:03:11.262 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.521 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.521 CC module/bdev/raid/raid0.o 00:03:11.521 LIB libspdk_bdev_null.a 00:03:11.521 LIB libspdk_bdev_passthru.a 00:03:11.521 SO libspdk_bdev_null.so.6.0 00:03:11.521 SO libspdk_bdev_passthru.so.6.0 00:03:11.521 LIB libspdk_bdev_malloc.a 00:03:11.521 SYMLINK libspdk_bdev_null.so 00:03:11.521 CC module/bdev/nvme/nvme_rpc.o 00:03:11.521 SYMLINK libspdk_bdev_passthru.so 00:03:11.521 SO libspdk_bdev_malloc.so.6.0 00:03:11.521 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.521 CC module/bdev/raid/raid1.o 00:03:11.781 SYMLINK libspdk_bdev_malloc.so 00:03:11.781 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.781 CC module/bdev/nvme/vbdev_opal.o 00:03:11.781 CC module/bdev/raid/concat.o 00:03:11.781 CC module/bdev/raid/raid5f.o 00:03:11.781 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.040 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.040 CC module/bdev/split/vbdev_split.o 00:03:12.040 LIB libspdk_bdev_lvol.a 00:03:12.040 SO libspdk_bdev_lvol.so.6.0 00:03:12.040 CC module/bdev/aio/bdev_aio.o 00:03:12.040 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.040 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:12.040 CC module/bdev/ftl/bdev_ftl.o 00:03:12.040 CC 
module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.040 SYMLINK libspdk_bdev_lvol.so 00:03:12.300 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.300 LIB libspdk_bdev_split.a 00:03:12.300 CC module/bdev/iscsi/bdev_iscsi.o 00:03:12.300 SO libspdk_bdev_split.so.6.0 00:03:12.300 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.300 SYMLINK libspdk_bdev_split.so 00:03:12.300 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.300 LIB libspdk_bdev_raid.a 00:03:12.300 LIB libspdk_bdev_ftl.a 00:03:12.560 SO libspdk_bdev_ftl.so.6.0 00:03:12.560 SO libspdk_bdev_raid.so.6.0 00:03:12.560 LIB libspdk_bdev_zone_block.a 00:03:12.560 SO libspdk_bdev_zone_block.so.6.0 00:03:12.560 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.560 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.560 LIB libspdk_bdev_aio.a 00:03:12.560 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.560 SYMLINK libspdk_bdev_ftl.so 00:03:12.560 SYMLINK libspdk_bdev_raid.so 00:03:12.560 SO libspdk_bdev_aio.so.6.0 00:03:12.560 SYMLINK libspdk_bdev_zone_block.so 00:03:12.560 SYMLINK libspdk_bdev_aio.so 00:03:12.820 LIB libspdk_bdev_iscsi.a 00:03:12.820 SO libspdk_bdev_iscsi.so.6.0 00:03:12.820 SYMLINK libspdk_bdev_iscsi.so 00:03:13.078 LIB libspdk_bdev_virtio.a 00:03:13.078 SO libspdk_bdev_virtio.so.6.0 00:03:13.337 SYMLINK libspdk_bdev_virtio.so 00:03:13.908 LIB libspdk_bdev_nvme.a 00:03:14.168 SO libspdk_bdev_nvme.so.7.1 00:03:14.168 SYMLINK libspdk_bdev_nvme.so 00:03:14.736 CC module/event/subsystems/iobuf/iobuf.o 00:03:14.736 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:14.736 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:14.736 CC module/event/subsystems/scheduler/scheduler.o 00:03:14.736 CC module/event/subsystems/fsdev/fsdev.o 00:03:14.736 CC module/event/subsystems/keyring/keyring.o 00:03:14.736 CC module/event/subsystems/sock/sock.o 00:03:14.736 CC module/event/subsystems/vmd/vmd.o 00:03:14.736 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:14.995 LIB libspdk_event_fsdev.a 00:03:14.995 LIB 
libspdk_event_vhost_blk.a 00:03:14.995 LIB libspdk_event_keyring.a 00:03:14.995 LIB libspdk_event_sock.a 00:03:14.995 LIB libspdk_event_iobuf.a 00:03:14.995 LIB libspdk_event_scheduler.a 00:03:14.995 SO libspdk_event_fsdev.so.1.0 00:03:14.995 SO libspdk_event_keyring.so.1.0 00:03:14.995 SO libspdk_event_sock.so.5.0 00:03:14.995 LIB libspdk_event_vmd.a 00:03:14.995 SO libspdk_event_vhost_blk.so.3.0 00:03:14.995 SO libspdk_event_iobuf.so.3.0 00:03:14.995 SO libspdk_event_vmd.so.6.0 00:03:14.995 SO libspdk_event_scheduler.so.4.0 00:03:14.995 SYMLINK libspdk_event_fsdev.so 00:03:14.995 SYMLINK libspdk_event_sock.so 00:03:14.995 SYMLINK libspdk_event_keyring.so 00:03:14.995 SYMLINK libspdk_event_vhost_blk.so 00:03:14.995 SYMLINK libspdk_event_iobuf.so 00:03:14.995 SYMLINK libspdk_event_scheduler.so 00:03:14.996 SYMLINK libspdk_event_vmd.so 00:03:15.254 CC module/event/subsystems/accel/accel.o 00:03:15.513 LIB libspdk_event_accel.a 00:03:15.513 SO libspdk_event_accel.so.6.0 00:03:15.772 SYMLINK libspdk_event_accel.so 00:03:16.031 CC module/event/subsystems/bdev/bdev.o 00:03:16.289 LIB libspdk_event_bdev.a 00:03:16.289 SO libspdk_event_bdev.so.6.0 00:03:16.289 SYMLINK libspdk_event_bdev.so 00:03:16.548 CC module/event/subsystems/nbd/nbd.o 00:03:16.548 CC module/event/subsystems/ublk/ublk.o 00:03:16.548 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:16.548 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:16.548 CC module/event/subsystems/scsi/scsi.o 00:03:16.806 LIB libspdk_event_nbd.a 00:03:16.806 LIB libspdk_event_ublk.a 00:03:16.806 SO libspdk_event_nbd.so.6.0 00:03:16.806 SO libspdk_event_ublk.so.3.0 00:03:16.806 LIB libspdk_event_scsi.a 00:03:16.806 SO libspdk_event_scsi.so.6.0 00:03:16.806 SYMLINK libspdk_event_nbd.so 00:03:16.806 SYMLINK libspdk_event_ublk.so 00:03:16.806 SYMLINK libspdk_event_scsi.so 00:03:16.806 LIB libspdk_event_nvmf.a 00:03:17.064 SO libspdk_event_nvmf.so.6.0 00:03:17.064 SYMLINK libspdk_event_nvmf.so 00:03:17.322 CC 
module/event/subsystems/iscsi/iscsi.o 00:03:17.322 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.322 LIB libspdk_event_iscsi.a 00:03:17.322 LIB libspdk_event_vhost_scsi.a 00:03:17.322 SO libspdk_event_iscsi.so.6.0 00:03:17.322 SO libspdk_event_vhost_scsi.so.3.0 00:03:17.580 SYMLINK libspdk_event_iscsi.so 00:03:17.580 SYMLINK libspdk_event_vhost_scsi.so 00:03:17.580 SO libspdk.so.6.0 00:03:17.580 SYMLINK libspdk.so 00:03:17.850 TEST_HEADER include/spdk/accel.h 00:03:17.850 TEST_HEADER include/spdk/accel_module.h 00:03:17.850 TEST_HEADER include/spdk/assert.h 00:03:17.850 TEST_HEADER include/spdk/barrier.h 00:03:17.850 TEST_HEADER include/spdk/base64.h 00:03:17.850 TEST_HEADER include/spdk/bdev.h 00:03:17.850 TEST_HEADER include/spdk/bdev_module.h 00:03:17.850 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.850 TEST_HEADER include/spdk/bit_array.h 00:03:17.850 TEST_HEADER include/spdk/bit_pool.h 00:03:17.850 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.850 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.850 CXX app/trace/trace.o 00:03:18.109 TEST_HEADER include/spdk/blobfs.h 00:03:18.109 CC test/rpc_client/rpc_client_test.o 00:03:18.109 TEST_HEADER include/spdk/blob.h 00:03:18.109 TEST_HEADER include/spdk/conf.h 00:03:18.109 TEST_HEADER include/spdk/config.h 00:03:18.109 TEST_HEADER include/spdk/cpuset.h 00:03:18.109 TEST_HEADER include/spdk/crc16.h 00:03:18.109 TEST_HEADER include/spdk/crc32.h 00:03:18.109 TEST_HEADER include/spdk/crc64.h 00:03:18.109 TEST_HEADER include/spdk/dif.h 00:03:18.109 TEST_HEADER include/spdk/dma.h 00:03:18.109 TEST_HEADER include/spdk/endian.h 00:03:18.109 TEST_HEADER include/spdk/env_dpdk.h 00:03:18.109 TEST_HEADER include/spdk/env.h 00:03:18.109 TEST_HEADER include/spdk/event.h 00:03:18.109 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.109 TEST_HEADER include/spdk/fd_group.h 00:03:18.109 TEST_HEADER include/spdk/fd.h 00:03:18.109 TEST_HEADER include/spdk/file.h 00:03:18.109 TEST_HEADER include/spdk/fsdev.h 00:03:18.109 
TEST_HEADER include/spdk/fsdev_module.h 00:03:18.109 TEST_HEADER include/spdk/ftl.h 00:03:18.109 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:18.109 TEST_HEADER include/spdk/gpt_spec.h 00:03:18.109 TEST_HEADER include/spdk/hexlify.h 00:03:18.109 TEST_HEADER include/spdk/histogram_data.h 00:03:18.109 TEST_HEADER include/spdk/idxd.h 00:03:18.109 TEST_HEADER include/spdk/idxd_spec.h 00:03:18.109 TEST_HEADER include/spdk/init.h 00:03:18.109 CC examples/util/zipf/zipf.o 00:03:18.109 CC test/thread/poller_perf/poller_perf.o 00:03:18.109 TEST_HEADER include/spdk/ioat.h 00:03:18.109 TEST_HEADER include/spdk/ioat_spec.h 00:03:18.109 TEST_HEADER include/spdk/iscsi_spec.h 00:03:18.109 TEST_HEADER include/spdk/json.h 00:03:18.109 TEST_HEADER include/spdk/jsonrpc.h 00:03:18.109 TEST_HEADER include/spdk/keyring.h 00:03:18.109 TEST_HEADER include/spdk/keyring_module.h 00:03:18.109 TEST_HEADER include/spdk/likely.h 00:03:18.109 TEST_HEADER include/spdk/log.h 00:03:18.109 TEST_HEADER include/spdk/lvol.h 00:03:18.109 TEST_HEADER include/spdk/md5.h 00:03:18.109 TEST_HEADER include/spdk/memory.h 00:03:18.109 TEST_HEADER include/spdk/mmio.h 00:03:18.109 CC examples/ioat/perf/perf.o 00:03:18.109 TEST_HEADER include/spdk/nbd.h 00:03:18.109 TEST_HEADER include/spdk/net.h 00:03:18.109 TEST_HEADER include/spdk/notify.h 00:03:18.109 TEST_HEADER include/spdk/nvme.h 00:03:18.109 TEST_HEADER include/spdk/nvme_intel.h 00:03:18.109 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:18.109 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:18.109 TEST_HEADER include/spdk/nvme_spec.h 00:03:18.109 TEST_HEADER include/spdk/nvme_zns.h 00:03:18.109 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:18.109 CC test/app/bdev_svc/bdev_svc.o 00:03:18.110 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:18.110 TEST_HEADER include/spdk/nvmf.h 00:03:18.110 CC test/dma/test_dma/test_dma.o 00:03:18.110 TEST_HEADER include/spdk/nvmf_spec.h 00:03:18.110 TEST_HEADER include/spdk/nvmf_transport.h 00:03:18.110 TEST_HEADER 
include/spdk/opal.h 00:03:18.110 TEST_HEADER include/spdk/opal_spec.h 00:03:18.110 TEST_HEADER include/spdk/pci_ids.h 00:03:18.110 TEST_HEADER include/spdk/pipe.h 00:03:18.110 TEST_HEADER include/spdk/queue.h 00:03:18.110 TEST_HEADER include/spdk/reduce.h 00:03:18.110 TEST_HEADER include/spdk/rpc.h 00:03:18.110 TEST_HEADER include/spdk/scheduler.h 00:03:18.110 TEST_HEADER include/spdk/scsi.h 00:03:18.110 TEST_HEADER include/spdk/scsi_spec.h 00:03:18.110 TEST_HEADER include/spdk/sock.h 00:03:18.110 TEST_HEADER include/spdk/stdinc.h 00:03:18.110 TEST_HEADER include/spdk/string.h 00:03:18.110 TEST_HEADER include/spdk/thread.h 00:03:18.110 TEST_HEADER include/spdk/trace.h 00:03:18.110 TEST_HEADER include/spdk/trace_parser.h 00:03:18.110 TEST_HEADER include/spdk/tree.h 00:03:18.110 TEST_HEADER include/spdk/ublk.h 00:03:18.110 CC test/env/mem_callbacks/mem_callbacks.o 00:03:18.110 TEST_HEADER include/spdk/util.h 00:03:18.110 TEST_HEADER include/spdk/uuid.h 00:03:18.110 TEST_HEADER include/spdk/version.h 00:03:18.110 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:18.110 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:18.110 TEST_HEADER include/spdk/vhost.h 00:03:18.110 TEST_HEADER include/spdk/vmd.h 00:03:18.110 TEST_HEADER include/spdk/xor.h 00:03:18.110 TEST_HEADER include/spdk/zipf.h 00:03:18.110 CXX test/cpp_headers/accel.o 00:03:18.110 LINK rpc_client_test 00:03:18.110 LINK poller_perf 00:03:18.110 LINK interrupt_tgt 00:03:18.110 LINK zipf 00:03:18.369 LINK bdev_svc 00:03:18.369 LINK ioat_perf 00:03:18.369 CXX test/cpp_headers/accel_module.o 00:03:18.369 CXX test/cpp_headers/assert.o 00:03:18.369 CXX test/cpp_headers/barrier.o 00:03:18.369 LINK spdk_trace 00:03:18.369 CC examples/ioat/verify/verify.o 00:03:18.645 CXX test/cpp_headers/base64.o 00:03:18.645 CC test/app/histogram_perf/histogram_perf.o 00:03:18.645 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.645 CC test/app/jsoncat/jsoncat.o 00:03:18.645 CC test/app/stub/stub.o 00:03:18.645 CC 
app/trace_record/trace_record.o 00:03:18.645 LINK test_dma 00:03:18.645 CXX test/cpp_headers/bdev.o 00:03:18.645 LINK histogram_perf 00:03:18.645 LINK mem_callbacks 00:03:18.645 CC examples/thread/thread/thread_ex.o 00:03:18.645 LINK verify 00:03:18.645 LINK jsoncat 00:03:18.920 LINK stub 00:03:18.920 CXX test/cpp_headers/bdev_module.o 00:03:18.920 CC test/env/vtophys/vtophys.o 00:03:18.920 LINK spdk_trace_record 00:03:18.920 CC app/spdk_lspci/spdk_lspci.o 00:03:18.920 CC app/nvmf_tgt/nvmf_main.o 00:03:18.920 LINK thread 00:03:18.920 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.920 CC app/spdk_tgt/spdk_tgt.o 00:03:18.920 CC app/spdk_nvme_perf/perf.o 00:03:19.180 LINK nvme_fuzz 00:03:19.180 CXX test/cpp_headers/bdev_zone.o 00:03:19.180 LINK vtophys 00:03:19.180 LINK spdk_lspci 00:03:19.180 LINK nvmf_tgt 00:03:19.180 LINK iscsi_tgt 00:03:19.180 CC test/event/event_perf/event_perf.o 00:03:19.180 LINK spdk_tgt 00:03:19.439 CXX test/cpp_headers/bit_array.o 00:03:19.439 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:19.439 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.439 CC examples/sock/hello_world/hello_sock.o 00:03:19.439 CC app/spdk_nvme_identify/identify.o 00:03:19.439 LINK event_perf 00:03:19.439 CXX test/cpp_headers/bit_pool.o 00:03:19.439 CXX test/cpp_headers/blob_bdev.o 00:03:19.439 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.439 LINK env_dpdk_post_init 00:03:19.697 CC test/nvme/aer/aer.o 00:03:19.697 CC test/event/reactor/reactor.o 00:03:19.697 LINK hello_sock 00:03:19.697 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.697 CC test/event/reactor_perf/reactor_perf.o 00:03:19.697 LINK spdk_nvme_discover 00:03:19.697 CC test/env/memory/memory_ut.o 00:03:19.697 LINK reactor 00:03:19.957 LINK reactor_perf 00:03:19.957 CXX test/cpp_headers/blobfs.o 00:03:19.957 LINK aer 00:03:19.957 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.957 CC examples/vmd/led/led.o 00:03:19.957 LINK spdk_nvme_perf 00:03:19.957 CC app/spdk_top/spdk_top.o 00:03:19.957 CXX 
test/cpp_headers/blob.o 00:03:20.216 CC test/event/app_repeat/app_repeat.o 00:03:20.216 LINK lsvmd 00:03:20.216 LINK led 00:03:20.216 CXX test/cpp_headers/conf.o 00:03:20.216 CC test/nvme/reset/reset.o 00:03:20.216 LINK app_repeat 00:03:20.216 CC test/nvme/sgl/sgl.o 00:03:20.476 CXX test/cpp_headers/config.o 00:03:20.476 CXX test/cpp_headers/cpuset.o 00:03:20.476 CXX test/cpp_headers/crc16.o 00:03:20.476 LINK spdk_nvme_identify 00:03:20.476 CC test/event/scheduler/scheduler.o 00:03:20.476 CC examples/idxd/perf/perf.o 00:03:20.476 LINK reset 00:03:20.476 LINK sgl 00:03:20.476 CXX test/cpp_headers/crc32.o 00:03:20.735 LINK scheduler 00:03:20.735 CXX test/cpp_headers/crc64.o 00:03:20.735 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.735 CC examples/accel/perf/accel_perf.o 00:03:20.735 CC test/nvme/e2edp/nvme_dp.o 00:03:20.735 LINK idxd_perf 00:03:20.735 CC examples/blob/hello_world/hello_blob.o 00:03:20.994 CXX test/cpp_headers/dif.o 00:03:20.994 CC test/nvme/overhead/overhead.o 00:03:20.994 LINK hello_fsdev 00:03:20.994 LINK memory_ut 00:03:20.994 LINK spdk_top 00:03:20.994 CXX test/cpp_headers/dma.o 00:03:20.994 CC test/nvme/err_injection/err_injection.o 00:03:20.994 LINK nvme_dp 00:03:20.994 LINK hello_blob 00:03:21.254 CXX test/cpp_headers/endian.o 00:03:21.254 LINK overhead 00:03:21.254 LINK err_injection 00:03:21.254 LINK accel_perf 00:03:21.254 CC app/vhost/vhost.o 00:03:21.254 CC test/env/pci/pci_ut.o 00:03:21.254 LINK iscsi_fuzz 00:03:21.254 CC app/spdk_dd/spdk_dd.o 00:03:21.254 CXX test/cpp_headers/env_dpdk.o 00:03:21.254 CC examples/blob/cli/blobcli.o 00:03:21.513 CC app/fio/nvme/fio_plugin.o 00:03:21.513 CXX test/cpp_headers/env.o 00:03:21.513 CXX test/cpp_headers/event.o 00:03:21.513 LINK vhost 00:03:21.513 CC test/nvme/startup/startup.o 00:03:21.513 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:21.513 CXX test/cpp_headers/fd_group.o 00:03:21.772 CC examples/nvme/hello_world/hello_world.o 00:03:21.772 CC examples/nvme/reconnect/reconnect.o 
00:03:21.772 LINK startup 00:03:21.772 LINK spdk_dd 00:03:21.772 LINK pci_ut 00:03:21.772 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:21.772 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.772 CXX test/cpp_headers/fd.o 00:03:22.031 LINK hello_world 00:03:22.031 LINK blobcli 00:03:22.031 CXX test/cpp_headers/file.o 00:03:22.031 CC test/nvme/reserve/reserve.o 00:03:22.031 LINK spdk_nvme 00:03:22.031 CC app/fio/bdev/fio_plugin.o 00:03:22.031 LINK reconnect 00:03:22.031 CXX test/cpp_headers/fsdev.o 00:03:22.291 CC examples/nvme/arbitration/arbitration.o 00:03:22.291 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.291 LINK reserve 00:03:22.291 CC examples/nvme/hotplug/hotplug.o 00:03:22.291 LINK vhost_fuzz 00:03:22.292 CC test/nvme/simple_copy/simple_copy.o 00:03:22.292 CXX test/cpp_headers/fsdev_module.o 00:03:22.292 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:22.292 LINK nvme_manage 00:03:22.551 CXX test/cpp_headers/ftl.o 00:03:22.551 LINK hello_bdev 00:03:22.551 CC examples/nvme/abort/abort.o 00:03:22.551 LINK hotplug 00:03:22.551 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:22.551 LINK simple_copy 00:03:22.551 LINK cmb_copy 00:03:22.551 LINK arbitration 00:03:22.551 LINK spdk_bdev 00:03:22.551 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.551 CXX test/cpp_headers/fuse_dispatcher.o 00:03:22.551 LINK pmr_persistence 00:03:22.810 CXX test/cpp_headers/gpt_spec.o 00:03:22.810 CC test/nvme/connect_stress/connect_stress.o 00:03:22.810 CXX test/cpp_headers/hexlify.o 00:03:22.810 CC test/nvme/boot_partition/boot_partition.o 00:03:22.810 CC test/accel/dif/dif.o 00:03:22.810 CC test/blobfs/mkfs/mkfs.o 00:03:22.810 LINK abort 00:03:22.810 CXX test/cpp_headers/histogram_data.o 00:03:22.810 CC test/lvol/esnap/esnap.o 00:03:22.810 CC test/nvme/compliance/nvme_compliance.o 00:03:23.070 LINK connect_stress 00:03:23.070 LINK boot_partition 00:03:23.070 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.070 CXX test/cpp_headers/idxd.o 00:03:23.070 LINK mkfs 
00:03:23.070 CXX test/cpp_headers/idxd_spec.o 00:03:23.070 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.330 CXX test/cpp_headers/init.o 00:03:23.330 LINK fused_ordering 00:03:23.330 CXX test/cpp_headers/ioat.o 00:03:23.330 CC test/nvme/fdp/fdp.o 00:03:23.330 CC test/nvme/cuse/cuse.o 00:03:23.330 LINK nvme_compliance 00:03:23.330 CXX test/cpp_headers/ioat_spec.o 00:03:23.330 LINK doorbell_aers 00:03:23.330 CXX test/cpp_headers/iscsi_spec.o 00:03:23.330 CXX test/cpp_headers/json.o 00:03:23.599 CXX test/cpp_headers/jsonrpc.o 00:03:23.599 CXX test/cpp_headers/keyring.o 00:03:23.599 LINK bdevperf 00:03:23.599 CXX test/cpp_headers/keyring_module.o 00:03:23.599 CXX test/cpp_headers/likely.o 00:03:23.599 CXX test/cpp_headers/log.o 00:03:23.599 LINK dif 00:03:23.599 LINK fdp 00:03:23.599 CXX test/cpp_headers/lvol.o 00:03:23.599 CXX test/cpp_headers/md5.o 00:03:23.599 CXX test/cpp_headers/memory.o 00:03:23.859 CXX test/cpp_headers/mmio.o 00:03:23.859 CXX test/cpp_headers/nbd.o 00:03:23.859 CXX test/cpp_headers/net.o 00:03:23.859 CXX test/cpp_headers/notify.o 00:03:23.859 CXX test/cpp_headers/nvme.o 00:03:23.859 CXX test/cpp_headers/nvme_intel.o 00:03:23.859 CXX test/cpp_headers/nvme_ocssd.o 00:03:23.859 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:23.859 CXX test/cpp_headers/nvme_spec.o 00:03:24.119 CXX test/cpp_headers/nvme_zns.o 00:03:24.119 CC examples/nvmf/nvmf/nvmf.o 00:03:24.119 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.119 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.119 CXX test/cpp_headers/nvmf.o 00:03:24.119 CXX test/cpp_headers/nvmf_spec.o 00:03:24.119 CC test/bdev/bdevio/bdevio.o 00:03:24.119 CXX test/cpp_headers/nvmf_transport.o 00:03:24.119 CXX test/cpp_headers/opal.o 00:03:24.119 CXX test/cpp_headers/opal_spec.o 00:03:24.379 CXX test/cpp_headers/pci_ids.o 00:03:24.379 CXX test/cpp_headers/pipe.o 00:03:24.379 CXX test/cpp_headers/queue.o 00:03:24.379 LINK nvmf 00:03:24.379 CXX test/cpp_headers/reduce.o 00:03:24.379 CXX test/cpp_headers/rpc.o 00:03:24.379 
CXX test/cpp_headers/scheduler.o 00:03:24.379 CXX test/cpp_headers/scsi.o 00:03:24.379 CXX test/cpp_headers/scsi_spec.o 00:03:24.379 CXX test/cpp_headers/sock.o 00:03:24.379 CXX test/cpp_headers/stdinc.o 00:03:24.379 CXX test/cpp_headers/string.o 00:03:24.640 CXX test/cpp_headers/thread.o 00:03:24.640 CXX test/cpp_headers/trace.o 00:03:24.640 LINK bdevio 00:03:24.640 CXX test/cpp_headers/trace_parser.o 00:03:24.640 CXX test/cpp_headers/tree.o 00:03:24.640 CXX test/cpp_headers/ublk.o 00:03:24.640 CXX test/cpp_headers/util.o 00:03:24.640 CXX test/cpp_headers/uuid.o 00:03:24.640 CXX test/cpp_headers/version.o 00:03:24.640 CXX test/cpp_headers/vfio_user_pci.o 00:03:24.640 CXX test/cpp_headers/vfio_user_spec.o 00:03:24.640 CXX test/cpp_headers/vhost.o 00:03:24.640 LINK cuse 00:03:24.640 CXX test/cpp_headers/vmd.o 00:03:24.640 CXX test/cpp_headers/xor.o 00:03:24.900 CXX test/cpp_headers/zipf.o 00:03:29.105 LINK esnap 00:03:29.105 00:03:29.105 real 1m24.229s 00:03:29.105 user 7m30.171s 00:03:29.105 sys 1m29.796s 00:03:29.105 10:14:42 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:29.105 10:14:42 make -- common/autotest_common.sh@10 -- $ set +x 00:03:29.105 ************************************ 00:03:29.105 END TEST make 00:03:29.105 ************************************ 00:03:29.105 10:14:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:29.105 10:14:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:29.105 10:14:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:29.105 10:14:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.105 10:14:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:29.105 10:14:42 -- pm/common@44 -- $ pid=5465 00:03:29.105 10:14:42 -- pm/common@50 -- $ kill -TERM 5465 00:03:29.105 10:14:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.105 10:14:42 -- pm/common@43 -- $ [[ -e 
/home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:29.105 10:14:42 -- pm/common@44 -- $ pid=5467 00:03:29.105 10:14:42 -- pm/common@50 -- $ kill -TERM 5467 00:03:29.105 10:14:42 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:29.105 10:14:42 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:29.373 10:14:42 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:29.373 10:14:42 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:29.373 10:14:42 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:29.373 10:14:42 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:29.373 10:14:43 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:29.373 10:14:43 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:29.373 10:14:43 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:29.373 10:14:43 -- scripts/common.sh@336 -- # IFS=.-: 00:03:29.373 10:14:43 -- scripts/common.sh@336 -- # read -ra ver1 00:03:29.373 10:14:43 -- scripts/common.sh@337 -- # IFS=.-: 00:03:29.373 10:14:43 -- scripts/common.sh@337 -- # read -ra ver2 00:03:29.373 10:14:43 -- scripts/common.sh@338 -- # local 'op=<' 00:03:29.373 10:14:43 -- scripts/common.sh@340 -- # ver1_l=2 00:03:29.373 10:14:43 -- scripts/common.sh@341 -- # ver2_l=1 00:03:29.373 10:14:43 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:29.373 10:14:43 -- scripts/common.sh@344 -- # case "$op" in 00:03:29.373 10:14:43 -- scripts/common.sh@345 -- # : 1 00:03:29.373 10:14:43 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:29.373 10:14:43 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:29.373 10:14:43 -- scripts/common.sh@365 -- # decimal 1 00:03:29.373 10:14:43 -- scripts/common.sh@353 -- # local d=1 00:03:29.373 10:14:43 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:29.373 10:14:43 -- scripts/common.sh@355 -- # echo 1 00:03:29.373 10:14:43 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:29.373 10:14:43 -- scripts/common.sh@366 -- # decimal 2 00:03:29.373 10:14:43 -- scripts/common.sh@353 -- # local d=2 00:03:29.373 10:14:43 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:29.373 10:14:43 -- scripts/common.sh@355 -- # echo 2 00:03:29.373 10:14:43 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:29.373 10:14:43 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:29.373 10:14:43 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:29.373 10:14:43 -- scripts/common.sh@368 -- # return 0 00:03:29.373 10:14:43 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:29.373 10:14:43 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:29.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.373 --rc genhtml_branch_coverage=1 00:03:29.373 --rc genhtml_function_coverage=1 00:03:29.373 --rc genhtml_legend=1 00:03:29.374 --rc geninfo_all_blocks=1 00:03:29.374 --rc geninfo_unexecuted_blocks=1 00:03:29.374 00:03:29.374 ' 00:03:29.374 10:14:43 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.374 --rc genhtml_branch_coverage=1 00:03:29.374 --rc genhtml_function_coverage=1 00:03:29.374 --rc genhtml_legend=1 00:03:29.374 --rc geninfo_all_blocks=1 00:03:29.374 --rc geninfo_unexecuted_blocks=1 00:03:29.374 00:03:29.374 ' 00:03:29.374 10:14:43 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.374 --rc genhtml_branch_coverage=1 00:03:29.374 --rc 
genhtml_function_coverage=1 00:03:29.374 --rc genhtml_legend=1 00:03:29.374 --rc geninfo_all_blocks=1 00:03:29.374 --rc geninfo_unexecuted_blocks=1 00:03:29.374 00:03:29.374 ' 00:03:29.374 10:14:43 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:29.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:29.374 --rc genhtml_branch_coverage=1 00:03:29.374 --rc genhtml_function_coverage=1 00:03:29.374 --rc genhtml_legend=1 00:03:29.374 --rc geninfo_all_blocks=1 00:03:29.374 --rc geninfo_unexecuted_blocks=1 00:03:29.374 00:03:29.374 ' 00:03:29.374 10:14:43 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:29.374 10:14:43 -- nvmf/common.sh@7 -- # uname -s 00:03:29.374 10:14:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:29.374 10:14:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:29.374 10:14:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:29.374 10:14:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:29.374 10:14:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:29.374 10:14:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:29.374 10:14:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:29.374 10:14:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:29.374 10:14:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:29.374 10:14:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:29.374 10:14:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9ccf630-8d77-473d-8904-7d75d98bdf9d 00:03:29.374 10:14:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=f9ccf630-8d77-473d-8904-7d75d98bdf9d 00:03:29.374 10:14:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:29.374 10:14:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:29.374 10:14:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:29.374 10:14:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:29.374 10:14:43 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:29.374 10:14:43 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:29.374 10:14:43 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:29.374 10:14:43 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:29.374 10:14:43 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:29.374 10:14:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.374 10:14:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.374 10:14:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.374 10:14:43 -- paths/export.sh@5 -- # export PATH 00:03:29.374 10:14:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:29.374 10:14:43 -- nvmf/common.sh@51 -- # : 0 00:03:29.374 10:14:43 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:29.374 10:14:43 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:29.374 10:14:43 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:29.374 10:14:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:29.374 10:14:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:29.374 10:14:43 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:29.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:29.374 10:14:43 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:29.374 10:14:43 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:29.374 10:14:43 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:29.374 10:14:43 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:29.374 10:14:43 -- spdk/autotest.sh@32 -- # uname -s 00:03:29.374 10:14:43 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:29.374 10:14:43 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:29.374 10:14:43 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.374 10:14:43 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:29.374 10:14:43 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:29.374 10:14:43 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:29.374 10:14:43 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:29.374 10:14:43 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:29.374 10:14:43 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:29.374 10:14:43 -- spdk/autotest.sh@48 -- # udevadm_pid=54438 00:03:29.374 10:14:43 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:29.374 10:14:43 -- pm/common@17 -- # local monitor 00:03:29.374 10:14:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.374 10:14:43 -- pm/common@21 -- # date +%s 00:03:29.374 10:14:43 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:29.374 10:14:43 -- pm/common@25 -- # sleep 1 00:03:29.374 10:14:43 -- 
pm/common@21 -- # date +%s 00:03:29.374 10:14:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732011283 00:03:29.634 10:14:43 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732011283 00:03:29.634 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732011283_collect-vmstat.pm.log 00:03:29.634 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732011283_collect-cpu-load.pm.log 00:03:30.574 10:14:44 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:30.574 10:14:44 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:30.574 10:14:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:30.574 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:03:30.574 10:14:44 -- spdk/autotest.sh@59 -- # create_test_list 00:03:30.574 10:14:44 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:30.574 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:03:30.574 10:14:44 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:30.574 10:14:44 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:30.574 10:14:44 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:30.574 10:14:44 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:30.574 10:14:44 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:30.574 10:14:44 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:30.574 10:14:44 -- common/autotest_common.sh@1457 -- # uname 00:03:30.574 10:14:44 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:30.574 10:14:44 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:30.574 10:14:44 -- common/autotest_common.sh@1477 -- 
# uname 00:03:30.574 10:14:44 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:30.574 10:14:44 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:30.574 10:14:44 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:30.574 lcov: LCOV version 1.15 00:03:30.574 10:14:44 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:45.464 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:45.464 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:00.351 10:15:12 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:00.351 10:15:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.351 10:15:12 -- common/autotest_common.sh@10 -- # set +x 00:04:00.351 10:15:12 -- spdk/autotest.sh@78 -- # rm -f 00:04:00.351 10:15:12 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.351 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:00.351 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:00.351 10:15:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.351 10:15:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:00.351 10:15:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:00.351 10:15:13 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:00.351 
10:15:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.351 10:15:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:00.351 10:15:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:00.351 10:15:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.351 10:15:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:04:00.351 10:15:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:04:00.351 10:15:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.351 10:15:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:04:00.351 10:15:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:04:00.351 10:15:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:00.351 10:15:13 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:00.351 10:15:13 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:00.351 10:15:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:00.351 10:15:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.351 10:15:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.351 10:15:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.351 10:15:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.351 10:15:13 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:04:00.351 10:15:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.351 10:15:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.351 No valid GPT data, bailing 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # pt= 00:04:00.351 10:15:13 -- scripts/common.sh@395 -- # return 1 00:04:00.351 10:15:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.351 1+0 records in 00:04:00.351 1+0 records out 00:04:00.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00639315 s, 164 MB/s 00:04:00.351 10:15:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.351 10:15:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.351 10:15:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n2 00:04:00.351 10:15:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n2 pt 00:04:00.351 10:15:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n2 00:04:00.351 No valid GPT data, bailing 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # pt= 00:04:00.351 10:15:13 -- scripts/common.sh@395 -- # return 1 00:04:00.351 10:15:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n2 bs=1M count=1 00:04:00.351 1+0 records in 00:04:00.351 1+0 records out 00:04:00.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00539373 s, 194 MB/s 00:04:00.351 10:15:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.351 10:15:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.351 10:15:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n3 00:04:00.351 10:15:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n3 pt 00:04:00.351 10:15:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n3 
00:04:00.351 No valid GPT data, bailing 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # pt= 00:04:00.351 10:15:13 -- scripts/common.sh@395 -- # return 1 00:04:00.351 10:15:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n3 bs=1M count=1 00:04:00.351 1+0 records in 00:04:00.351 1+0 records out 00:04:00.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548566 s, 191 MB/s 00:04:00.351 10:15:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.351 10:15:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.351 10:15:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:00.351 10:15:13 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:00.351 10:15:13 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:00.351 No valid GPT data, bailing 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:00.351 10:15:13 -- scripts/common.sh@394 -- # pt= 00:04:00.351 10:15:13 -- scripts/common.sh@395 -- # return 1 00:04:00.351 10:15:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:00.351 1+0 records in 00:04:00.351 1+0 records out 00:04:00.351 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596057 s, 176 MB/s 00:04:00.351 10:15:13 -- spdk/autotest.sh@105 -- # sync 00:04:00.351 10:15:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.351 10:15:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.351 10:15:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.888 10:15:16 -- spdk/autotest.sh@111 -- # uname -s 00:04:02.888 10:15:16 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:02.888 10:15:16 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:02.888 10:15:16 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:04:03.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:03.455 Hugepages 00:04:03.455 node hugesize free / total 00:04:03.455 node0 1048576kB 0 / 0 00:04:03.455 node0 2048kB 0 / 0 00:04:03.455 00:04:03.455 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:03.716 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:03.716 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:03.716 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:04:03.716 10:15:17 -- spdk/autotest.sh@117 -- # uname -s 00:04:03.716 10:15:17 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:03.716 10:15:17 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:03.716 10:15:17 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:04.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.657 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.657 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:04.917 10:15:18 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:05.861 10:15:19 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:05.861 10:15:19 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:05.861 10:15:19 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:05.861 10:15:19 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:05.861 10:15:19 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:05.861 10:15:19 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:05.861 10:15:19 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:05.861 10:15:19 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:05.861 10:15:19 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:05.861 10:15:19 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:05.861 10:15:19 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:05.861 10:15:19 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:06.430 Waiting for block devices as requested 00:04:06.430 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.430 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:06.690 10:15:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.690 10:15:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:06.690 10:15:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:06.690 10:15:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:06.690 10:15:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:06.690 10:15:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.690 10:15:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.690 10:15:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.690 10:15:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.690 10:15:20 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:06.690 10:15:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.690 10:15:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.690 10:15:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.690 10:15:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.690 10:15:20 -- common/autotest_common.sh@1543 -- # continue 00:04:06.690 10:15:20 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:06.690 10:15:20 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:06.690 10:15:20 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:06.690 10:15:20 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:06.691 10:15:20 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.691 10:15:20 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:06.691 10:15:20 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:06.691 10:15:20 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:06.691 10:15:20 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:06.691 10:15:20 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:06.691 10:15:20 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:06.691 10:15:20 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:06.691 10:15:20 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:06.691 10:15:20 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:06.691 10:15:20 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:06.691 10:15:20 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:06.691 10:15:20 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:06.691 10:15:20 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:06.691 10:15:20 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:06.691 10:15:20 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:06.691 10:15:20 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:06.691 10:15:20 -- common/autotest_common.sh@1543 -- # continue 00:04:06.691 10:15:20 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:06.691 10:15:20 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:06.691 10:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.691 10:15:20 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:06.691 10:15:20 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:06.691 10:15:20 -- common/autotest_common.sh@10 -- # set +x 00:04:06.691 10:15:20 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.642 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.642 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:07.902 10:15:21 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:07.902 10:15:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.902 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.902 10:15:21 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:07.902 10:15:21 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:07.902 10:15:21 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:07.902 10:15:21 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:07.902 10:15:21 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:07.902 10:15:21 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:07.903 10:15:21 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:07.903 10:15:21 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:07.903 
10:15:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:07.903 10:15:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:07.903 10:15:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:07.903 10:15:21 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:07.903 10:15:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:07.903 10:15:21 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:07.903 10:15:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:07.903 10:15:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.903 10:15:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:07.903 10:15:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.903 10:15:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.903 10:15:21 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:07.903 10:15:21 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:07.903 10:15:21 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:07.903 10:15:21 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:07.903 10:15:21 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:07.903 10:15:21 -- common/autotest_common.sh@1572 -- # return 0 00:04:07.903 10:15:21 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:07.903 10:15:21 -- common/autotest_common.sh@1580 -- # return 0 00:04:07.903 10:15:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:07.903 10:15:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:07.903 10:15:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:07.903 10:15:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:07.903 10:15:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:07.903 10:15:21 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.903 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.903 10:15:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:07.903 10:15:21 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:07.903 10:15:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.903 10:15:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.903 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.903 ************************************ 00:04:07.903 START TEST env 00:04:07.903 ************************************ 00:04:07.903 10:15:21 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.163 * Looking for test storage... 00:04:08.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.163 10:15:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.163 10:15:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.163 10:15:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.163 10:15:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.163 10:15:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.163 10:15:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.163 10:15:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.163 10:15:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.163 10:15:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.163 10:15:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.163 10:15:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.163 10:15:21 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:08.163 10:15:21 env -- scripts/common.sh@345 -- # : 1 00:04:08.163 10:15:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.163 10:15:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.163 10:15:21 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.163 10:15:21 env -- scripts/common.sh@353 -- # local d=1 00:04:08.163 10:15:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.163 10:15:21 env -- scripts/common.sh@355 -- # echo 1 00:04:08.163 10:15:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.163 10:15:21 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.163 10:15:21 env -- scripts/common.sh@353 -- # local d=2 00:04:08.163 10:15:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.163 10:15:21 env -- scripts/common.sh@355 -- # echo 2 00:04:08.163 10:15:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.163 10:15:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.163 10:15:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.163 10:15:21 env -- scripts/common.sh@368 -- # return 0 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.163 --rc genhtml_branch_coverage=1 00:04:08.163 --rc genhtml_function_coverage=1 00:04:08.163 --rc genhtml_legend=1 00:04:08.163 --rc geninfo_all_blocks=1 00:04:08.163 --rc geninfo_unexecuted_blocks=1 00:04:08.163 00:04:08.163 ' 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.163 --rc genhtml_branch_coverage=1 00:04:08.163 --rc genhtml_function_coverage=1 00:04:08.163 --rc genhtml_legend=1 00:04:08.163 --rc 
geninfo_all_blocks=1 00:04:08.163 --rc geninfo_unexecuted_blocks=1 00:04:08.163 00:04:08.163 ' 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.163 --rc genhtml_branch_coverage=1 00:04:08.163 --rc genhtml_function_coverage=1 00:04:08.163 --rc genhtml_legend=1 00:04:08.163 --rc geninfo_all_blocks=1 00:04:08.163 --rc geninfo_unexecuted_blocks=1 00:04:08.163 00:04:08.163 ' 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.163 --rc genhtml_branch_coverage=1 00:04:08.163 --rc genhtml_function_coverage=1 00:04:08.163 --rc genhtml_legend=1 00:04:08.163 --rc geninfo_all_blocks=1 00:04:08.163 --rc geninfo_unexecuted_blocks=1 00:04:08.163 00:04:08.163 ' 00:04:08.163 10:15:21 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.163 10:15:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.163 10:15:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.163 ************************************ 00:04:08.163 START TEST env_memory 00:04:08.163 ************************************ 00:04:08.163 10:15:21 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.163 00:04:08.163 00:04:08.163 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.163 http://cunit.sourceforge.net/ 00:04:08.163 00:04:08.163 00:04:08.163 Suite: memory 00:04:08.163 Test: alloc and free memory map ...[2024-11-19 10:15:21.938199] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.424 passed 00:04:08.424 Test: mem map translation ...[2024-11-19 10:15:21.981138] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.424 [2024-11-19 10:15:21.981204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.424 [2024-11-19 10:15:21.981284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.424 [2024-11-19 10:15:21.981305] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.424 passed 00:04:08.424 Test: mem map registration ...[2024-11-19 10:15:22.047309] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.424 [2024-11-19 10:15:22.047375] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.424 passed 00:04:08.424 Test: mem map adjacent registrations ...passed 00:04:08.424 00:04:08.424 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.424 suites 1 1 n/a 0 0 00:04:08.424 tests 4 4 4 0 0 00:04:08.424 asserts 152 152 152 0 n/a 00:04:08.424 00:04:08.424 Elapsed time = 0.235 seconds 00:04:08.424 00:04:08.424 real 0m0.288s 00:04:08.424 user 0m0.248s 00:04:08.424 sys 0m0.029s 00:04:08.424 10:15:22 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.424 10:15:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.424 ************************************ 00:04:08.424 END TEST env_memory 00:04:08.424 ************************************ 00:04:08.684 10:15:22 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.684 
10:15:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.684 10:15:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.684 10:15:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.684 ************************************ 00:04:08.684 START TEST env_vtophys 00:04:08.684 ************************************ 00:04:08.684 10:15:22 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.684 EAL: lib.eal log level changed from notice to debug 00:04:08.684 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 1 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 2 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 3 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 4 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 5 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 6 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 7 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 8 as core 0 on socket 0 00:04:08.684 EAL: Detected lcore 9 as core 0 on socket 0 00:04:08.684 EAL: Maximum logical cores by configuration: 128 00:04:08.684 EAL: Detected CPU lcores: 10 00:04:08.684 EAL: Detected NUMA nodes: 1 00:04:08.684 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.684 EAL: Detected shared linkage of DPDK 00:04:08.684 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.684 EAL: Selected IOVA mode 'PA' 00:04:08.684 EAL: Probing VFIO support... 00:04:08.684 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.684 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:08.684 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.684 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.684 EAL: Setting up physically contiguous memory... 
00:04:08.684 EAL: Setting maximum number of open files to 524288 00:04:08.684 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.684 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.684 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.684 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.684 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.684 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.684 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.684 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.684 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.684 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.684 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.684 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.684 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.684 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.684 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.684 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.684 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.684 EAL: Hugepages will be freed exactly as allocated. 
00:04:08.684 EAL: No shared files mode enabled, IPC is disabled 00:04:08.684 EAL: No shared files mode enabled, IPC is disabled 00:04:08.684 EAL: TSC frequency is ~2290000 KHz 00:04:08.684 EAL: Main lcore 0 is ready (tid=7fbad54c7a40;cpuset=[0]) 00:04:08.684 EAL: Trying to obtain current memory policy. 00:04:08.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.684 EAL: Restoring previous memory policy: 0 00:04:08.684 EAL: request: mp_malloc_sync 00:04:08.684 EAL: No shared files mode enabled, IPC is disabled 00:04:08.685 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.685 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:08.685 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.685 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.685 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:08.685 00:04:08.685 00:04:08.685 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.685 http://cunit.sourceforge.net/ 00:04:08.685 00:04:08.685 00:04:08.685 Suite: components_suite 00:04:09.252 Test: vtophys_malloc_test ...passed 00:04:09.252 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:09.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.252 EAL: Restoring previous memory policy: 4 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.252 EAL: Trying to obtain current memory policy. 
00:04:09.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.252 EAL: Restoring previous memory policy: 4 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.252 EAL: Trying to obtain current memory policy. 00:04:09.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.252 EAL: Restoring previous memory policy: 4 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.252 EAL: Trying to obtain current memory policy. 00:04:09.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.252 EAL: Restoring previous memory policy: 4 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.252 EAL: Trying to obtain current memory policy. 
00:04:09.252 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.252 EAL: Restoring previous memory policy: 4 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.252 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.252 EAL: request: mp_malloc_sync 00:04:09.252 EAL: No shared files mode enabled, IPC is disabled 00:04:09.252 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.512 EAL: Trying to obtain current memory policy. 00:04:09.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.512 EAL: Restoring previous memory policy: 4 00:04:09.512 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.512 EAL: request: mp_malloc_sync 00:04:09.512 EAL: No shared files mode enabled, IPC is disabled 00:04:09.512 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.512 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.512 EAL: request: mp_malloc_sync 00:04:09.512 EAL: No shared files mode enabled, IPC is disabled 00:04:09.512 EAL: Heap on socket 0 was shrunk by 66MB 00:04:09.512 EAL: Trying to obtain current memory policy. 00:04:09.512 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.771 EAL: Restoring previous memory policy: 4 00:04:09.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.771 EAL: request: mp_malloc_sync 00:04:09.771 EAL: No shared files mode enabled, IPC is disabled 00:04:09.771 EAL: Heap on socket 0 was expanded by 130MB 00:04:09.771 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.771 EAL: request: mp_malloc_sync 00:04:09.771 EAL: No shared files mode enabled, IPC is disabled 00:04:09.771 EAL: Heap on socket 0 was shrunk by 130MB 00:04:10.030 EAL: Trying to obtain current memory policy. 
00:04:10.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.030 EAL: Restoring previous memory policy: 4 00:04:10.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.030 EAL: request: mp_malloc_sync 00:04:10.030 EAL: No shared files mode enabled, IPC is disabled 00:04:10.030 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.598 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.598 EAL: request: mp_malloc_sync 00:04:10.598 EAL: No shared files mode enabled, IPC is disabled 00:04:10.598 EAL: Heap on socket 0 was shrunk by 258MB 00:04:11.166 EAL: Trying to obtain current memory policy. 00:04:11.166 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.166 EAL: Restoring previous memory policy: 4 00:04:11.166 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.166 EAL: request: mp_malloc_sync 00:04:11.166 EAL: No shared files mode enabled, IPC is disabled 00:04:11.166 EAL: Heap on socket 0 was expanded by 514MB 00:04:12.107 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.107 EAL: request: mp_malloc_sync 00:04:12.107 EAL: No shared files mode enabled, IPC is disabled 00:04:12.107 EAL: Heap on socket 0 was shrunk by 514MB 00:04:13.044 EAL: Trying to obtain current memory policy. 
00:04:13.044 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:13.303 EAL: Restoring previous memory policy: 4
00:04:13.303 EAL: Calling mem event callback 'spdk:(nil)'
00:04:13.303 EAL: request: mp_malloc_sync
00:04:13.303 EAL: No shared files mode enabled, IPC is disabled
00:04:13.303 EAL: Heap on socket 0 was expanded by 1026MB
00:04:15.210 EAL: Calling mem event callback 'spdk:(nil)'
00:04:15.210 EAL: request: mp_malloc_sync
00:04:15.210 EAL: No shared files mode enabled, IPC is disabled
00:04:15.210 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:17.118 passed
00:04:17.118 
00:04:17.118 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:17.118               suites      1      1    n/a      0        0
00:04:17.118                tests      2      2      2      0        0
00:04:17.118              asserts   5677   5677   5677      0      n/a
00:04:17.118 
00:04:17.118 Elapsed time =    7.941 seconds
00:04:17.118 EAL: Calling mem event callback 'spdk:(nil)'
00:04:17.118 EAL: request: mp_malloc_sync
00:04:17.118 EAL: No shared files mode enabled, IPC is disabled
00:04:17.118 EAL: Heap on socket 0 was shrunk by 2MB
00:04:17.118 EAL: No shared files mode enabled, IPC is disabled
00:04:17.118 EAL: No shared files mode enabled, IPC is disabled
00:04:17.118 EAL: No shared files mode enabled, IPC is disabled
00:04:17.118 
00:04:17.118 real	0m8.267s
00:04:17.118 user	0m7.302s
00:04:17.118 sys	0m0.810s
00:04:17.118 10:15:30 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.118 10:15:30 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:17.118 ************************************
00:04:17.118 END TEST env_vtophys
00:04:17.118 ************************************
00:04:17.118 10:15:30 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:17.118 10:15:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.118 10:15:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.118 10:15:30 env -- common/autotest_common.sh@10 -- # set +x
************************************ 00:04:17.118 START TEST env_pci
00:04:17.118 ************************************
00:04:17.118 10:15:30 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:17.118 
00:04:17.118 
00:04:17.118 CUnit - A unit testing framework for C - Version 2.1-3
00:04:17.118 http://cunit.sourceforge.net/
00:04:17.118 
00:04:17.118 
00:04:17.118 Suite: pci
00:04:17.118 Test: pci_hook ...[2024-11-19 10:15:30.591249] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56734 has claimed it
00:04:17.118 passed
00:04:17.118 
00:04:17.118 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:17.118               suites      1      1    n/a      0        0
00:04:17.118                tests      1      1      1      0        0
00:04:17.118              asserts     25     25     25      0      n/a
00:04:17.118 
00:04:17.118 Elapsed time =    0.005 seconds
00:04:17.118 EAL: Cannot find device (10000:00:01.0)
00:04:17.118 EAL: Failed to attach device on primary process
00:04:17.118 
00:04:17.118 real	0m0.104s
00:04:17.118 user	0m0.050s
00:04:17.118 sys	0m0.053s
00:04:17.118 10:15:30 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.118 10:15:30 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:17.118 ************************************
00:04:17.118 END TEST env_pci
00:04:17.118 ************************************
00:04:17.118 10:15:30 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:17.118 10:15:30 env -- env/env.sh@15 -- # uname
00:04:17.118 10:15:30 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:17.118 10:15:30 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:17.118 10:15:30 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:17.118 10:15:30 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:17.118 10:15:30 env
-- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.118 10:15:30 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.118 ************************************
00:04:17.118 START TEST env_dpdk_post_init
00:04:17.118 ************************************
00:04:17.118 10:15:30 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:17.118 EAL: Detected CPU lcores: 10
00:04:17.118 EAL: Detected NUMA nodes: 1
00:04:17.118 EAL: Detected shared linkage of DPDK
00:04:17.118 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:17.118 EAL: Selected IOVA mode 'PA'
00:04:17.378 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:17.378 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:17.378 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:17.378 Starting DPDK initialization...
00:04:17.378 Starting SPDK post initialization...
00:04:17.378 SPDK NVMe probe
00:04:17.378 Attaching to 0000:00:10.0
00:04:17.378 Attaching to 0000:00:11.0
00:04:17.378 Attached to 0000:00:10.0
00:04:17.378 Attached to 0000:00:11.0
00:04:17.378 Cleaning up...
00:04:17.378 
00:04:17.378 real	0m0.276s
00:04:17.378 user	0m0.082s
00:04:17.378 sys	0m0.094s
00:04:17.378 10:15:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.378 10:15:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:17.378 ************************************
00:04:17.378 END TEST env_dpdk_post_init
00:04:17.378 ************************************
00:04:17.378 10:15:31 env -- env/env.sh@26 -- # uname
00:04:17.378 10:15:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:17.378 10:15:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:17.378 10:15:31 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:17.378 10:15:31 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:17.378 10:15:31 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.378 ************************************
00:04:17.378 START TEST env_mem_callbacks
00:04:17.378 ************************************
00:04:17.378 10:15:31 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:17.378 EAL: Detected CPU lcores: 10
00:04:17.378 EAL: Detected NUMA nodes: 1
00:04:17.378 EAL: Detected shared linkage of DPDK
00:04:17.378 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:17.378 EAL: Selected IOVA mode 'PA'
00:04:17.638 
00:04:17.638 
00:04:17.638 CUnit - A unit testing framework for C - Version 2.1-3
00:04:17.638 http://cunit.sourceforge.net/
00:04:17.638 
00:04:17.638 
00:04:17.638 Suite: memory
00:04:17.638 Test: test ...
00:04:17.638 register 0x200000200000 2097152
00:04:17.638 malloc 3145728
00:04:17.638 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:17.638 register 0x200000400000 4194304
00:04:17.638 buf 0x2000004fffc0 len 3145728 PASSED
00:04:17.638 malloc 64
00:04:17.638 buf 0x2000004ffec0 len 64 PASSED
00:04:17.638 malloc 4194304
00:04:17.638 register 0x200000800000 6291456
00:04:17.638 buf 0x2000009fffc0 len 4194304 PASSED
00:04:17.638 free 0x2000004fffc0 3145728
00:04:17.638 free 0x2000004ffec0 64
00:04:17.638 unregister 0x200000400000 4194304 PASSED
00:04:17.638 free 0x2000009fffc0 4194304
00:04:17.638 unregister 0x200000800000 6291456 PASSED
00:04:17.638 malloc 8388608
00:04:17.638 register 0x200000400000 10485760
00:04:17.638 buf 0x2000005fffc0 len 8388608 PASSED
00:04:17.638 free 0x2000005fffc0 8388608
00:04:17.638 unregister 0x200000400000 10485760 PASSED
00:04:17.638 passed
00:04:17.638 
00:04:17.638 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:17.638               suites      1      1    n/a      0        0
00:04:17.638                tests      1      1      1      0        0
00:04:17.638              asserts     15     15     15      0      n/a
00:04:17.638 
00:04:17.638 Elapsed time =    0.082 seconds
00:04:17.638 
00:04:17.638 real	0m0.283s
00:04:17.638 user	0m0.115s
00:04:17.638 sys	0m0.065s
00:04:17.638 10:15:31 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.638 10:15:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:17.638 ************************************
00:04:17.638 END TEST env_mem_callbacks
00:04:17.638 ************************************
00:04:17.638 
00:04:17.638 real	0m9.755s
00:04:17.638 user	0m8.010s
00:04:17.638 sys	0m1.402s
00:04:17.638 10:15:31 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:17.638 10:15:31 env -- common/autotest_common.sh@10 -- # set +x
00:04:17.638 ************************************
00:04:17.638 END TEST env
00:04:17.638 ************************************
00:04:17.898 10:15:31 -- spdk/autotest.sh@156 -- # run_test rpc
/home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.898 10:15:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.898 10:15:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.898 10:15:31 -- common/autotest_common.sh@10 -- # set +x 00:04:17.898 ************************************ 00:04:17.898 START TEST rpc 00:04:17.898 ************************************ 00:04:17.898 10:15:31 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:17.898 * Looking for test storage... 00:04:17.898 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.898 10:15:31 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.898 10:15:31 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.898 10:15:31 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.898 10:15:31 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.898 10:15:31 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.898 10:15:31 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.898 10:15:31 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.898 10:15:31 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.898 10:15:31 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.898 10:15:31 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.898 10:15:31 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.898 10:15:31 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.898 10:15:31 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.898 10:15:31 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.898 10:15:31 rpc -- scripts/common.sh@345 -- # : 1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.898 10:15:31 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.898 10:15:31 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.898 10:15:31 rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.898 10:15:31 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:18.158 10:15:31 rpc -- scripts/common.sh@353 -- # local d=2 00:04:18.158 10:15:31 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.158 10:15:31 rpc -- scripts/common.sh@355 -- # echo 2 00:04:18.158 10:15:31 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.158 10:15:31 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.158 10:15:31 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.158 10:15:31 rpc -- scripts/common.sh@368 -- # return 0 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:18.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.158 --rc genhtml_branch_coverage=1 00:04:18.158 --rc genhtml_function_coverage=1 00:04:18.158 --rc genhtml_legend=1 00:04:18.158 --rc geninfo_all_blocks=1 00:04:18.158 --rc geninfo_unexecuted_blocks=1 00:04:18.158 00:04:18.158 ' 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:18.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.158 --rc genhtml_branch_coverage=1 00:04:18.158 --rc genhtml_function_coverage=1 00:04:18.158 --rc genhtml_legend=1 00:04:18.158 --rc geninfo_all_blocks=1 00:04:18.158 --rc geninfo_unexecuted_blocks=1 00:04:18.158 00:04:18.158 ' 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:18.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:18.158 --rc genhtml_branch_coverage=1 00:04:18.158 --rc genhtml_function_coverage=1 00:04:18.158 --rc genhtml_legend=1 00:04:18.158 --rc geninfo_all_blocks=1 00:04:18.158 --rc geninfo_unexecuted_blocks=1 00:04:18.158 00:04:18.158 ' 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:18.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.158 --rc genhtml_branch_coverage=1 00:04:18.158 --rc genhtml_function_coverage=1 00:04:18.158 --rc genhtml_legend=1 00:04:18.158 --rc geninfo_all_blocks=1 00:04:18.158 --rc geninfo_unexecuted_blocks=1 00:04:18.158 00:04:18.158 ' 00:04:18.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.158 10:15:31 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56861 00:04:18.158 10:15:31 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:18.158 10:15:31 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.158 10:15:31 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56861 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@835 -- # '[' -z 56861 ']' 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:18.158 10:15:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.158 [2024-11-19 10:15:31.786716] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:18.158 [2024-11-19 10:15:31.786934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56861 ] 00:04:18.418 [2024-11-19 10:15:31.966244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.418 [2024-11-19 10:15:32.084190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:18.418 [2024-11-19 10:15:32.084332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56861' to capture a snapshot of events at runtime. 00:04:18.418 [2024-11-19 10:15:32.084373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:18.418 [2024-11-19 10:15:32.084408] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:18.418 [2024-11-19 10:15:32.084428] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56861 for offline analysis/debug. 
00:04:18.418 [2024-11-19 10:15:32.085709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.367 10:15:33 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:19.367 10:15:33 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:19.367 10:15:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.367 10:15:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:19.368 10:15:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:19.368 10:15:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:19.368 10:15:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.368 10:15:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.368 10:15:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.368 ************************************ 00:04:19.368 START TEST rpc_integrity 00:04:19.368 ************************************ 00:04:19.368 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:19.368 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:19.368 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.368 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.368 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.368 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:19.368 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:19.368 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:19.368 10:15:33 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:19.368 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.368 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.628 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.628 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:19.628 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:19.628 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.628 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.628 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.628 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:19.628 { 00:04:19.628 "name": "Malloc0", 00:04:19.628 "aliases": [ 00:04:19.628 "ac8d9de2-b186-469e-8e0d-48308dec2e96" 00:04:19.628 ], 00:04:19.628 "product_name": "Malloc disk", 00:04:19.628 "block_size": 512, 00:04:19.628 "num_blocks": 16384, 00:04:19.628 "uuid": "ac8d9de2-b186-469e-8e0d-48308dec2e96", 00:04:19.628 "assigned_rate_limits": { 00:04:19.628 "rw_ios_per_sec": 0, 00:04:19.628 "rw_mbytes_per_sec": 0, 00:04:19.628 "r_mbytes_per_sec": 0, 00:04:19.628 "w_mbytes_per_sec": 0 00:04:19.628 }, 00:04:19.628 "claimed": false, 00:04:19.628 "zoned": false, 00:04:19.628 "supported_io_types": { 00:04:19.628 "read": true, 00:04:19.628 "write": true, 00:04:19.628 "unmap": true, 00:04:19.628 "flush": true, 00:04:19.628 "reset": true, 00:04:19.628 "nvme_admin": false, 00:04:19.628 "nvme_io": false, 00:04:19.628 "nvme_io_md": false, 00:04:19.628 "write_zeroes": true, 00:04:19.628 "zcopy": true, 00:04:19.628 "get_zone_info": false, 00:04:19.628 "zone_management": false, 00:04:19.628 "zone_append": false, 00:04:19.628 "compare": false, 00:04:19.628 "compare_and_write": false, 00:04:19.628 "abort": true, 00:04:19.628 "seek_hole": false, 
00:04:19.628 "seek_data": false, 00:04:19.628 "copy": true, 00:04:19.628 "nvme_iov_md": false 00:04:19.628 }, 00:04:19.628 "memory_domains": [ 00:04:19.628 { 00:04:19.628 "dma_device_id": "system", 00:04:19.628 "dma_device_type": 1 00:04:19.628 }, 00:04:19.628 { 00:04:19.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.628 "dma_device_type": 2 00:04:19.628 } 00:04:19.628 ], 00:04:19.629 "driver_specific": {} 00:04:19.629 } 00:04:19.629 ]' 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.629 [2024-11-19 10:15:33.236076] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:19.629 [2024-11-19 10:15:33.236181] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:19.629 [2024-11-19 10:15:33.236208] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:19.629 [2024-11-19 10:15:33.236222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:19.629 [2024-11-19 10:15:33.238546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:19.629 [2024-11-19 10:15:33.238587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:19.629 Passthru0 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:19.629 { 00:04:19.629 "name": "Malloc0", 00:04:19.629 "aliases": [ 00:04:19.629 "ac8d9de2-b186-469e-8e0d-48308dec2e96" 00:04:19.629 ], 00:04:19.629 "product_name": "Malloc disk", 00:04:19.629 "block_size": 512, 00:04:19.629 "num_blocks": 16384, 00:04:19.629 "uuid": "ac8d9de2-b186-469e-8e0d-48308dec2e96", 00:04:19.629 "assigned_rate_limits": { 00:04:19.629 "rw_ios_per_sec": 0, 00:04:19.629 "rw_mbytes_per_sec": 0, 00:04:19.629 "r_mbytes_per_sec": 0, 00:04:19.629 "w_mbytes_per_sec": 0 00:04:19.629 }, 00:04:19.629 "claimed": true, 00:04:19.629 "claim_type": "exclusive_write", 00:04:19.629 "zoned": false, 00:04:19.629 "supported_io_types": { 00:04:19.629 "read": true, 00:04:19.629 "write": true, 00:04:19.629 "unmap": true, 00:04:19.629 "flush": true, 00:04:19.629 "reset": true, 00:04:19.629 "nvme_admin": false, 00:04:19.629 "nvme_io": false, 00:04:19.629 "nvme_io_md": false, 00:04:19.629 "write_zeroes": true, 00:04:19.629 "zcopy": true, 00:04:19.629 "get_zone_info": false, 00:04:19.629 "zone_management": false, 00:04:19.629 "zone_append": false, 00:04:19.629 "compare": false, 00:04:19.629 "compare_and_write": false, 00:04:19.629 "abort": true, 00:04:19.629 "seek_hole": false, 00:04:19.629 "seek_data": false, 00:04:19.629 "copy": true, 00:04:19.629 "nvme_iov_md": false 00:04:19.629 }, 00:04:19.629 "memory_domains": [ 00:04:19.629 { 00:04:19.629 "dma_device_id": "system", 00:04:19.629 "dma_device_type": 1 00:04:19.629 }, 00:04:19.629 { 00:04:19.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.629 "dma_device_type": 2 00:04:19.629 } 00:04:19.629 ], 00:04:19.629 "driver_specific": {} 00:04:19.629 }, 00:04:19.629 { 00:04:19.629 "name": "Passthru0", 00:04:19.629 "aliases": [ 00:04:19.629 "b018a34c-5d6c-5309-bfff-b4104640c0ef" 00:04:19.629 ], 00:04:19.629 "product_name": "passthru", 00:04:19.629 
"block_size": 512, 00:04:19.629 "num_blocks": 16384, 00:04:19.629 "uuid": "b018a34c-5d6c-5309-bfff-b4104640c0ef", 00:04:19.629 "assigned_rate_limits": { 00:04:19.629 "rw_ios_per_sec": 0, 00:04:19.629 "rw_mbytes_per_sec": 0, 00:04:19.629 "r_mbytes_per_sec": 0, 00:04:19.629 "w_mbytes_per_sec": 0 00:04:19.629 }, 00:04:19.629 "claimed": false, 00:04:19.629 "zoned": false, 00:04:19.629 "supported_io_types": { 00:04:19.629 "read": true, 00:04:19.629 "write": true, 00:04:19.629 "unmap": true, 00:04:19.629 "flush": true, 00:04:19.629 "reset": true, 00:04:19.629 "nvme_admin": false, 00:04:19.629 "nvme_io": false, 00:04:19.629 "nvme_io_md": false, 00:04:19.629 "write_zeroes": true, 00:04:19.629 "zcopy": true, 00:04:19.629 "get_zone_info": false, 00:04:19.629 "zone_management": false, 00:04:19.629 "zone_append": false, 00:04:19.629 "compare": false, 00:04:19.629 "compare_and_write": false, 00:04:19.629 "abort": true, 00:04:19.629 "seek_hole": false, 00:04:19.629 "seek_data": false, 00:04:19.629 "copy": true, 00:04:19.629 "nvme_iov_md": false 00:04:19.629 }, 00:04:19.629 "memory_domains": [ 00:04:19.629 { 00:04:19.629 "dma_device_id": "system", 00:04:19.629 "dma_device_type": 1 00:04:19.629 }, 00:04:19.629 { 00:04:19.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.629 "dma_device_type": 2 00:04:19.629 } 00:04:19.629 ], 00:04:19.629 "driver_specific": { 00:04:19.629 "passthru": { 00:04:19.629 "name": "Passthru0", 00:04:19.629 "base_bdev_name": "Malloc0" 00:04:19.629 } 00:04:19.629 } 00:04:19.629 } 00:04:19.629 ]' 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.629 10:15:33 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.629 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:19.629 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:19.890 ************************************ 00:04:19.890 END TEST rpc_integrity 00:04:19.890 ************************************ 00:04:19.890 10:15:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:19.890 00:04:19.890 real 0m0.341s 00:04:19.890 user 0m0.187s 00:04:19.890 sys 0m0.051s 00:04:19.890 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.890 10:15:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:19.890 10:15:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:19.890 10:15:33 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.890 10:15:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.890 10:15:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:19.890 ************************************ 00:04:19.890 START TEST rpc_plugins 00:04:19.890 ************************************ 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:19.890 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.890 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:19.890 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.890 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.890 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:19.890 { 00:04:19.890 "name": "Malloc1", 00:04:19.890 "aliases": [ 00:04:19.890 "5a8c2850-839c-4246-9e1c-2308955ce191" 00:04:19.890 ], 00:04:19.890 "product_name": "Malloc disk", 00:04:19.890 "block_size": 4096, 00:04:19.890 "num_blocks": 256, 00:04:19.890 "uuid": "5a8c2850-839c-4246-9e1c-2308955ce191", 00:04:19.890 "assigned_rate_limits": { 00:04:19.890 "rw_ios_per_sec": 0, 00:04:19.890 "rw_mbytes_per_sec": 0, 00:04:19.890 "r_mbytes_per_sec": 0, 00:04:19.890 "w_mbytes_per_sec": 0 00:04:19.890 }, 00:04:19.891 "claimed": false, 00:04:19.891 "zoned": false, 00:04:19.891 "supported_io_types": { 00:04:19.891 "read": true, 00:04:19.891 "write": true, 00:04:19.891 "unmap": true, 00:04:19.891 "flush": true, 00:04:19.891 "reset": true, 00:04:19.891 "nvme_admin": false, 00:04:19.891 "nvme_io": false, 00:04:19.891 "nvme_io_md": false, 00:04:19.891 "write_zeroes": true, 00:04:19.891 "zcopy": true, 00:04:19.891 "get_zone_info": false, 00:04:19.891 "zone_management": false, 00:04:19.891 "zone_append": false, 00:04:19.891 "compare": false, 00:04:19.891 "compare_and_write": false, 00:04:19.891 "abort": true, 00:04:19.891 "seek_hole": false, 00:04:19.891 "seek_data": false, 00:04:19.891 "copy": 
true, 00:04:19.891 "nvme_iov_md": false 00:04:19.891 }, 00:04:19.891 "memory_domains": [ 00:04:19.891 { 00:04:19.891 "dma_device_id": "system", 00:04:19.891 "dma_device_type": 1 00:04:19.891 }, 00:04:19.891 { 00:04:19.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:19.891 "dma_device_type": 2 00:04:19.891 } 00:04:19.891 ], 00:04:19.891 "driver_specific": {} 00:04:19.891 } 00:04:19.891 ]' 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:19.891 ************************************ 00:04:19.891 END TEST rpc_plugins 00:04:19.891 ************************************ 00:04:19.891 10:15:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:19.891 00:04:19.891 real 0m0.176s 00:04:19.891 user 0m0.106s 00:04:19.891 sys 0m0.023s 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:19.891 10:15:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:20.152 10:15:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:20.152 10:15:33 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.152 10:15:33 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.152 10:15:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.152 ************************************ 00:04:20.152 START TEST rpc_trace_cmd_test 00:04:20.152 ************************************ 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:20.152 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56861", 00:04:20.152 "tpoint_group_mask": "0x8", 00:04:20.152 "iscsi_conn": { 00:04:20.152 "mask": "0x2", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "scsi": { 00:04:20.152 "mask": "0x4", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "bdev": { 00:04:20.152 "mask": "0x8", 00:04:20.152 "tpoint_mask": "0xffffffffffffffff" 00:04:20.152 }, 00:04:20.152 "nvmf_rdma": { 00:04:20.152 "mask": "0x10", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "nvmf_tcp": { 00:04:20.152 "mask": "0x20", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "ftl": { 00:04:20.152 "mask": "0x40", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "blobfs": { 00:04:20.152 "mask": "0x80", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "dsa": { 00:04:20.152 "mask": "0x200", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "thread": { 00:04:20.152 "mask": "0x400", 00:04:20.152 
"tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "nvme_pcie": { 00:04:20.152 "mask": "0x800", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "iaa": { 00:04:20.152 "mask": "0x1000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "nvme_tcp": { 00:04:20.152 "mask": "0x2000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "bdev_nvme": { 00:04:20.152 "mask": "0x4000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "sock": { 00:04:20.152 "mask": "0x8000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "blob": { 00:04:20.152 "mask": "0x10000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "bdev_raid": { 00:04:20.152 "mask": "0x20000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 }, 00:04:20.152 "scheduler": { 00:04:20.152 "mask": "0x40000", 00:04:20.152 "tpoint_mask": "0x0" 00:04:20.152 } 00:04:20.152 }' 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:20.152 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:20.412 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:20.412 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:20.412 ************************************ 00:04:20.412 END TEST rpc_trace_cmd_test 00:04:20.412 ************************************ 00:04:20.412 10:15:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:20.412 00:04:20.412 real 0m0.268s 00:04:20.412 user 
0m0.218s 00:04:20.412 sys 0m0.040s 00:04:20.412 10:15:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.412 10:15:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 10:15:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:20.412 10:15:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:20.412 10:15:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:20.412 10:15:34 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.412 10:15:34 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.412 10:15:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 ************************************ 00:04:20.412 START TEST rpc_daemon_integrity 00:04:20.412 ************************************ 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.412 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:20.412 { 00:04:20.412 "name": "Malloc2", 00:04:20.412 "aliases": [ 00:04:20.412 "c1c21d4f-f9bf-4af0-a270-34bec1820cfa" 00:04:20.412 ], 00:04:20.412 "product_name": "Malloc disk", 00:04:20.412 "block_size": 512, 00:04:20.412 "num_blocks": 16384, 00:04:20.412 "uuid": "c1c21d4f-f9bf-4af0-a270-34bec1820cfa", 00:04:20.412 "assigned_rate_limits": { 00:04:20.412 "rw_ios_per_sec": 0, 00:04:20.412 "rw_mbytes_per_sec": 0, 00:04:20.412 "r_mbytes_per_sec": 0, 00:04:20.412 "w_mbytes_per_sec": 0 00:04:20.412 }, 00:04:20.412 "claimed": false, 00:04:20.412 "zoned": false, 00:04:20.412 "supported_io_types": { 00:04:20.412 "read": true, 00:04:20.412 "write": true, 00:04:20.413 "unmap": true, 00:04:20.413 "flush": true, 00:04:20.413 "reset": true, 00:04:20.413 "nvme_admin": false, 00:04:20.413 "nvme_io": false, 00:04:20.413 "nvme_io_md": false, 00:04:20.413 "write_zeroes": true, 00:04:20.413 "zcopy": true, 00:04:20.413 "get_zone_info": false, 00:04:20.413 "zone_management": false, 00:04:20.413 "zone_append": false, 00:04:20.413 "compare": false, 00:04:20.413 "compare_and_write": false, 00:04:20.413 "abort": true, 00:04:20.413 "seek_hole": false, 00:04:20.413 "seek_data": false, 00:04:20.413 "copy": true, 00:04:20.413 "nvme_iov_md": false 00:04:20.413 }, 00:04:20.413 "memory_domains": [ 00:04:20.413 { 00:04:20.413 "dma_device_id": "system", 00:04:20.413 "dma_device_type": 1 00:04:20.413 }, 00:04:20.413 { 00:04:20.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.413 "dma_device_type": 2 00:04:20.413 } 
00:04:20.413 ], 00:04:20.413 "driver_specific": {} 00:04:20.413 } 00:04:20.413 ]' 00:04:20.413 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.672 [2024-11-19 10:15:34.212611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:20.672 [2024-11-19 10:15:34.212670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:20.672 [2024-11-19 10:15:34.212691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:20.672 [2024-11-19 10:15:34.212702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:20.672 [2024-11-19 10:15:34.214944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:20.672 [2024-11-19 10:15:34.214987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:20.672 Passthru0 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.672 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:20.672 { 00:04:20.672 "name": "Malloc2", 00:04:20.673 "aliases": [ 00:04:20.673 "c1c21d4f-f9bf-4af0-a270-34bec1820cfa" 
00:04:20.673 ], 00:04:20.673 "product_name": "Malloc disk", 00:04:20.673 "block_size": 512, 00:04:20.673 "num_blocks": 16384, 00:04:20.673 "uuid": "c1c21d4f-f9bf-4af0-a270-34bec1820cfa", 00:04:20.673 "assigned_rate_limits": { 00:04:20.673 "rw_ios_per_sec": 0, 00:04:20.673 "rw_mbytes_per_sec": 0, 00:04:20.673 "r_mbytes_per_sec": 0, 00:04:20.673 "w_mbytes_per_sec": 0 00:04:20.673 }, 00:04:20.673 "claimed": true, 00:04:20.673 "claim_type": "exclusive_write", 00:04:20.673 "zoned": false, 00:04:20.673 "supported_io_types": { 00:04:20.673 "read": true, 00:04:20.673 "write": true, 00:04:20.673 "unmap": true, 00:04:20.673 "flush": true, 00:04:20.673 "reset": true, 00:04:20.673 "nvme_admin": false, 00:04:20.673 "nvme_io": false, 00:04:20.673 "nvme_io_md": false, 00:04:20.673 "write_zeroes": true, 00:04:20.673 "zcopy": true, 00:04:20.673 "get_zone_info": false, 00:04:20.673 "zone_management": false, 00:04:20.673 "zone_append": false, 00:04:20.673 "compare": false, 00:04:20.673 "compare_and_write": false, 00:04:20.673 "abort": true, 00:04:20.673 "seek_hole": false, 00:04:20.673 "seek_data": false, 00:04:20.673 "copy": true, 00:04:20.673 "nvme_iov_md": false 00:04:20.673 }, 00:04:20.673 "memory_domains": [ 00:04:20.673 { 00:04:20.673 "dma_device_id": "system", 00:04:20.673 "dma_device_type": 1 00:04:20.673 }, 00:04:20.673 { 00:04:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.673 "dma_device_type": 2 00:04:20.673 } 00:04:20.673 ], 00:04:20.673 "driver_specific": {} 00:04:20.673 }, 00:04:20.673 { 00:04:20.673 "name": "Passthru0", 00:04:20.673 "aliases": [ 00:04:20.673 "59655ad7-c821-550d-8099-be5fada8a7e7" 00:04:20.673 ], 00:04:20.673 "product_name": "passthru", 00:04:20.673 "block_size": 512, 00:04:20.673 "num_blocks": 16384, 00:04:20.673 "uuid": "59655ad7-c821-550d-8099-be5fada8a7e7", 00:04:20.673 "assigned_rate_limits": { 00:04:20.673 "rw_ios_per_sec": 0, 00:04:20.673 "rw_mbytes_per_sec": 0, 00:04:20.673 "r_mbytes_per_sec": 0, 00:04:20.673 "w_mbytes_per_sec": 0 
00:04:20.673 }, 00:04:20.673 "claimed": false, 00:04:20.673 "zoned": false, 00:04:20.673 "supported_io_types": { 00:04:20.673 "read": true, 00:04:20.673 "write": true, 00:04:20.673 "unmap": true, 00:04:20.673 "flush": true, 00:04:20.673 "reset": true, 00:04:20.673 "nvme_admin": false, 00:04:20.673 "nvme_io": false, 00:04:20.673 "nvme_io_md": false, 00:04:20.673 "write_zeroes": true, 00:04:20.673 "zcopy": true, 00:04:20.673 "get_zone_info": false, 00:04:20.673 "zone_management": false, 00:04:20.673 "zone_append": false, 00:04:20.673 "compare": false, 00:04:20.673 "compare_and_write": false, 00:04:20.673 "abort": true, 00:04:20.673 "seek_hole": false, 00:04:20.673 "seek_data": false, 00:04:20.673 "copy": true, 00:04:20.673 "nvme_iov_md": false 00:04:20.673 }, 00:04:20.673 "memory_domains": [ 00:04:20.673 { 00:04:20.673 "dma_device_id": "system", 00:04:20.673 "dma_device_type": 1 00:04:20.673 }, 00:04:20.673 { 00:04:20.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:20.673 "dma_device_type": 2 00:04:20.673 } 00:04:20.673 ], 00:04:20.673 "driver_specific": { 00:04:20.673 "passthru": { 00:04:20.673 "name": "Passthru0", 00:04:20.673 "base_bdev_name": "Malloc2" 00:04:20.673 } 00:04:20.673 } 00:04:20.673 } 00:04:20.673 ]' 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:20.673 ************************************ 00:04:20.673 END TEST rpc_daemon_integrity 00:04:20.673 ************************************ 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:20.673 00:04:20.673 real 0m0.361s 00:04:20.673 user 0m0.195s 00:04:20.673 sys 0m0.059s 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.673 10:15:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:20.933 10:15:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:20.933 10:15:34 rpc -- rpc/rpc.sh@84 -- # killprocess 56861 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@954 -- # '[' -z 56861 ']' 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@958 -- # kill -0 56861 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@959 -- # uname 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56861 00:04:20.933 killing process with pid 56861 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56861' 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@973 -- # kill 56861 00:04:20.933 10:15:34 rpc -- common/autotest_common.sh@978 -- # wait 56861 00:04:23.472 00:04:23.472 real 0m5.320s 00:04:23.472 user 0m5.807s 00:04:23.472 sys 0m1.003s 00:04:23.472 10:15:36 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:23.472 ************************************ 00:04:23.472 END TEST rpc 00:04:23.472 ************************************ 00:04:23.472 10:15:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.472 10:15:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.472 10:15:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.472 10:15:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.472 10:15:36 -- common/autotest_common.sh@10 -- # set +x 00:04:23.472 ************************************ 00:04:23.472 START TEST skip_rpc 00:04:23.472 ************************************ 00:04:23.472 10:15:36 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:23.472 * Looking for test storage... 
00:04:23.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:23.472 10:15:36 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:23.472 10:15:36 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:23.472 10:15:36 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.472 10:15:37 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.472 10:15:37 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.473 10:15:37 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.473 --rc genhtml_branch_coverage=1 00:04:23.473 --rc genhtml_function_coverage=1 00:04:23.473 --rc genhtml_legend=1 00:04:23.473 --rc geninfo_all_blocks=1 00:04:23.473 --rc geninfo_unexecuted_blocks=1 00:04:23.473 00:04:23.473 ' 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.473 --rc genhtml_branch_coverage=1 00:04:23.473 --rc genhtml_function_coverage=1 00:04:23.473 --rc genhtml_legend=1 00:04:23.473 --rc geninfo_all_blocks=1 00:04:23.473 --rc geninfo_unexecuted_blocks=1 00:04:23.473 00:04:23.473 ' 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.473 --rc genhtml_branch_coverage=1 00:04:23.473 --rc genhtml_function_coverage=1 00:04:23.473 --rc genhtml_legend=1 00:04:23.473 --rc geninfo_all_blocks=1 00:04:23.473 --rc geninfo_unexecuted_blocks=1 00:04:23.473 00:04:23.473 ' 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.473 --rc genhtml_branch_coverage=1 00:04:23.473 --rc genhtml_function_coverage=1 00:04:23.473 --rc genhtml_legend=1 00:04:23.473 --rc geninfo_all_blocks=1 00:04:23.473 --rc geninfo_unexecuted_blocks=1 00:04:23.473 00:04:23.473 ' 00:04:23.473 10:15:37 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:23.473 10:15:37 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:23.473 10:15:37 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.473 10:15:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:23.473 ************************************ 00:04:23.473 START TEST skip_rpc 00:04:23.473 ************************************ 00:04:23.473 10:15:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:23.473 10:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57090 00:04:23.473 10:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.473 10:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:23.473 10:15:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:23.473 [2024-11-19 10:15:37.188172] Starting SPDK v25.01-pre 
git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:23.473 [2024-11-19 10:15:37.188389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57090 ] 00:04:23.732 [2024-11-19 10:15:37.360800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.732 [2024-11-19 10:15:37.473188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57090 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57090 ']' 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57090 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57090 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57090' 00:04:29.040 killing process with pid 57090 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57090 00:04:29.040 10:15:42 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57090 00:04:30.950 00:04:30.950 real 0m7.384s 00:04:30.950 user 0m6.933s 00:04:30.950 sys 0m0.369s 00:04:30.950 10:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.950 ************************************ 00:04:30.950 END TEST skip_rpc 00:04:30.950 ************************************ 00:04:30.950 10:15:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.950 10:15:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:30.950 10:15:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.950 10:15:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.950 10:15:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.950 
************************************ 00:04:30.950 START TEST skip_rpc_with_json 00:04:30.950 ************************************ 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57194 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57194 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57194 ']' 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.950 10:15:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:30.950 [2024-11-19 10:15:44.638565] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:30.950 [2024-11-19 10:15:44.638764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57194 ] 00:04:31.209 [2024-11-19 10:15:44.814652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.209 [2024-11-19 10:15:44.929522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.150 [2024-11-19 10:15:45.779543] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:32.150 request: 00:04:32.150 { 00:04:32.150 "trtype": "tcp", 00:04:32.150 "method": "nvmf_get_transports", 00:04:32.150 "req_id": 1 00:04:32.150 } 00:04:32.150 Got JSON-RPC error response 00:04:32.150 response: 00:04:32.150 { 00:04:32.150 "code": -19, 00:04:32.150 "message": "No such device" 00:04:32.150 } 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.150 [2024-11-19 10:15:45.791618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.150 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:32.410 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.410 10:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:32.410 { 00:04:32.410 "subsystems": [ 00:04:32.410 { 00:04:32.410 "subsystem": "fsdev", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "fsdev_set_opts", 00:04:32.410 "params": { 00:04:32.410 "fsdev_io_pool_size": 65535, 00:04:32.410 "fsdev_io_cache_size": 256 00:04:32.410 } 00:04:32.410 } 00:04:32.410 ] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "keyring", 00:04:32.410 "config": [] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "iobuf", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "iobuf_set_options", 00:04:32.410 "params": { 00:04:32.410 "small_pool_count": 8192, 00:04:32.410 "large_pool_count": 1024, 00:04:32.410 "small_bufsize": 8192, 00:04:32.410 "large_bufsize": 135168, 00:04:32.410 "enable_numa": false 00:04:32.410 } 00:04:32.410 } 00:04:32.410 ] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "sock", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "sock_set_default_impl", 00:04:32.410 "params": { 00:04:32.410 "impl_name": "posix" 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "sock_impl_set_options", 00:04:32.410 "params": { 00:04:32.410 "impl_name": "ssl", 00:04:32.410 "recv_buf_size": 4096, 00:04:32.410 "send_buf_size": 4096, 00:04:32.410 "enable_recv_pipe": true, 00:04:32.410 "enable_quickack": false, 00:04:32.410 
"enable_placement_id": 0, 00:04:32.410 "enable_zerocopy_send_server": true, 00:04:32.410 "enable_zerocopy_send_client": false, 00:04:32.410 "zerocopy_threshold": 0, 00:04:32.410 "tls_version": 0, 00:04:32.410 "enable_ktls": false 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "sock_impl_set_options", 00:04:32.410 "params": { 00:04:32.410 "impl_name": "posix", 00:04:32.410 "recv_buf_size": 2097152, 00:04:32.410 "send_buf_size": 2097152, 00:04:32.410 "enable_recv_pipe": true, 00:04:32.410 "enable_quickack": false, 00:04:32.410 "enable_placement_id": 0, 00:04:32.410 "enable_zerocopy_send_server": true, 00:04:32.410 "enable_zerocopy_send_client": false, 00:04:32.410 "zerocopy_threshold": 0, 00:04:32.410 "tls_version": 0, 00:04:32.410 "enable_ktls": false 00:04:32.410 } 00:04:32.410 } 00:04:32.410 ] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "vmd", 00:04:32.410 "config": [] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "accel", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "accel_set_options", 00:04:32.410 "params": { 00:04:32.410 "small_cache_size": 128, 00:04:32.410 "large_cache_size": 16, 00:04:32.410 "task_count": 2048, 00:04:32.410 "sequence_count": 2048, 00:04:32.410 "buf_count": 2048 00:04:32.410 } 00:04:32.410 } 00:04:32.410 ] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "bdev", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "bdev_set_options", 00:04:32.410 "params": { 00:04:32.410 "bdev_io_pool_size": 65535, 00:04:32.410 "bdev_io_cache_size": 256, 00:04:32.410 "bdev_auto_examine": true, 00:04:32.410 "iobuf_small_cache_size": 128, 00:04:32.410 "iobuf_large_cache_size": 16 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "bdev_raid_set_options", 00:04:32.410 "params": { 00:04:32.410 "process_window_size_kb": 1024, 00:04:32.410 "process_max_bandwidth_mb_sec": 0 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "bdev_iscsi_set_options", 
00:04:32.410 "params": { 00:04:32.410 "timeout_sec": 30 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "bdev_nvme_set_options", 00:04:32.410 "params": { 00:04:32.410 "action_on_timeout": "none", 00:04:32.410 "timeout_us": 0, 00:04:32.410 "timeout_admin_us": 0, 00:04:32.410 "keep_alive_timeout_ms": 10000, 00:04:32.410 "arbitration_burst": 0, 00:04:32.410 "low_priority_weight": 0, 00:04:32.410 "medium_priority_weight": 0, 00:04:32.410 "high_priority_weight": 0, 00:04:32.410 "nvme_adminq_poll_period_us": 10000, 00:04:32.410 "nvme_ioq_poll_period_us": 0, 00:04:32.410 "io_queue_requests": 0, 00:04:32.410 "delay_cmd_submit": true, 00:04:32.410 "transport_retry_count": 4, 00:04:32.410 "bdev_retry_count": 3, 00:04:32.410 "transport_ack_timeout": 0, 00:04:32.410 "ctrlr_loss_timeout_sec": 0, 00:04:32.410 "reconnect_delay_sec": 0, 00:04:32.410 "fast_io_fail_timeout_sec": 0, 00:04:32.410 "disable_auto_failback": false, 00:04:32.410 "generate_uuids": false, 00:04:32.410 "transport_tos": 0, 00:04:32.410 "nvme_error_stat": false, 00:04:32.410 "rdma_srq_size": 0, 00:04:32.410 "io_path_stat": false, 00:04:32.410 "allow_accel_sequence": false, 00:04:32.410 "rdma_max_cq_size": 0, 00:04:32.410 "rdma_cm_event_timeout_ms": 0, 00:04:32.410 "dhchap_digests": [ 00:04:32.410 "sha256", 00:04:32.410 "sha384", 00:04:32.410 "sha512" 00:04:32.410 ], 00:04:32.410 "dhchap_dhgroups": [ 00:04:32.410 "null", 00:04:32.410 "ffdhe2048", 00:04:32.410 "ffdhe3072", 00:04:32.410 "ffdhe4096", 00:04:32.410 "ffdhe6144", 00:04:32.410 "ffdhe8192" 00:04:32.410 ] 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "bdev_nvme_set_hotplug", 00:04:32.410 "params": { 00:04:32.410 "period_us": 100000, 00:04:32.410 "enable": false 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "bdev_wait_for_examine" 00:04:32.410 } 00:04:32.410 ] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "scsi", 00:04:32.410 "config": null 00:04:32.410 }, 00:04:32.410 { 
00:04:32.410 "subsystem": "scheduler", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "framework_set_scheduler", 00:04:32.410 "params": { 00:04:32.410 "name": "static" 00:04:32.410 } 00:04:32.410 } 00:04:32.410 ] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "vhost_scsi", 00:04:32.410 "config": [] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "vhost_blk", 00:04:32.410 "config": [] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "ublk", 00:04:32.410 "config": [] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "nbd", 00:04:32.410 "config": [] 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "subsystem": "nvmf", 00:04:32.410 "config": [ 00:04:32.410 { 00:04:32.410 "method": "nvmf_set_config", 00:04:32.410 "params": { 00:04:32.410 "discovery_filter": "match_any", 00:04:32.410 "admin_cmd_passthru": { 00:04:32.410 "identify_ctrlr": false 00:04:32.410 }, 00:04:32.410 "dhchap_digests": [ 00:04:32.410 "sha256", 00:04:32.410 "sha384", 00:04:32.410 "sha512" 00:04:32.410 ], 00:04:32.410 "dhchap_dhgroups": [ 00:04:32.410 "null", 00:04:32.410 "ffdhe2048", 00:04:32.410 "ffdhe3072", 00:04:32.410 "ffdhe4096", 00:04:32.410 "ffdhe6144", 00:04:32.410 "ffdhe8192" 00:04:32.410 ] 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.410 "method": "nvmf_set_max_subsystems", 00:04:32.410 "params": { 00:04:32.410 "max_subsystems": 1024 00:04:32.410 } 00:04:32.410 }, 00:04:32.410 { 00:04:32.411 "method": "nvmf_set_crdt", 00:04:32.411 "params": { 00:04:32.411 "crdt1": 0, 00:04:32.411 "crdt2": 0, 00:04:32.411 "crdt3": 0 00:04:32.411 } 00:04:32.411 }, 00:04:32.411 { 00:04:32.411 "method": "nvmf_create_transport", 00:04:32.411 "params": { 00:04:32.411 "trtype": "TCP", 00:04:32.411 "max_queue_depth": 128, 00:04:32.411 "max_io_qpairs_per_ctrlr": 127, 00:04:32.411 "in_capsule_data_size": 4096, 00:04:32.411 "max_io_size": 131072, 00:04:32.411 "io_unit_size": 131072, 00:04:32.411 "max_aq_depth": 128, 00:04:32.411 "num_shared_buffers": 511, 
00:04:32.411 "buf_cache_size": 4294967295, 00:04:32.411 "dif_insert_or_strip": false, 00:04:32.411 "zcopy": false, 00:04:32.411 "c2h_success": true, 00:04:32.411 "sock_priority": 0, 00:04:32.411 "abort_timeout_sec": 1, 00:04:32.411 "ack_timeout": 0, 00:04:32.411 "data_wr_pool_size": 0 00:04:32.411 } 00:04:32.411 } 00:04:32.411 ] 00:04:32.411 }, 00:04:32.411 { 00:04:32.411 "subsystem": "iscsi", 00:04:32.411 "config": [ 00:04:32.411 { 00:04:32.411 "method": "iscsi_set_options", 00:04:32.411 "params": { 00:04:32.411 "node_base": "iqn.2016-06.io.spdk", 00:04:32.411 "max_sessions": 128, 00:04:32.411 "max_connections_per_session": 2, 00:04:32.411 "max_queue_depth": 64, 00:04:32.411 "default_time2wait": 2, 00:04:32.411 "default_time2retain": 20, 00:04:32.411 "first_burst_length": 8192, 00:04:32.411 "immediate_data": true, 00:04:32.411 "allow_duplicated_isid": false, 00:04:32.411 "error_recovery_level": 0, 00:04:32.411 "nop_timeout": 60, 00:04:32.411 "nop_in_interval": 30, 00:04:32.411 "disable_chap": false, 00:04:32.411 "require_chap": false, 00:04:32.411 "mutual_chap": false, 00:04:32.411 "chap_group": 0, 00:04:32.411 "max_large_datain_per_connection": 64, 00:04:32.411 "max_r2t_per_connection": 4, 00:04:32.411 "pdu_pool_size": 36864, 00:04:32.411 "immediate_data_pool_size": 16384, 00:04:32.411 "data_out_pool_size": 2048 00:04:32.411 } 00:04:32.411 } 00:04:32.411 ] 00:04:32.411 } 00:04:32.411 ] 00:04:32.411 } 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57194 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57194 ']' 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57194 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.411 10:15:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57194 00:04:32.411 10:15:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.411 10:15:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.411 10:15:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57194' 00:04:32.411 killing process with pid 57194 00:04:32.411 10:15:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57194 00:04:32.411 10:15:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57194 00:04:34.951 10:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57250 00:04:34.951 10:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:34.951 10:15:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57250 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57250 ']' 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57250 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57250 00:04:40.235 killing process with pid 57250 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57250' 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57250 00:04:40.235 10:15:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57250 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:42.143 ************************************ 00:04:42.143 END TEST skip_rpc_with_json 00:04:42.143 ************************************ 00:04:42.143 00:04:42.143 real 0m11.147s 00:04:42.143 user 0m10.636s 00:04:42.143 sys 0m0.817s 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.143 10:15:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:42.143 10:15:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.143 10:15:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.143 10:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.143 ************************************ 00:04:42.143 START TEST skip_rpc_with_delay 00:04:42.143 ************************************ 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:42.143 
10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:42.143 [2024-11-19 10:15:55.851373] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:42.143 00:04:42.143 real 0m0.169s 00:04:42.143 user 0m0.095s 00:04:42.143 sys 0m0.073s 00:04:42.143 ************************************ 00:04:42.143 END TEST skip_rpc_with_delay 00:04:42.143 ************************************ 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.143 10:15:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:42.403 10:15:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:42.403 10:15:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:42.404 10:15:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:42.404 10:15:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.404 10:15:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.404 10:15:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.404 ************************************ 00:04:42.404 START TEST exit_on_failed_rpc_init 00:04:42.404 ************************************ 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57378 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57378 00:04:42.404 10:15:55 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57378 ']' 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.404 10:15:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.404 [2024-11-19 10:15:56.083249] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:42.404 [2024-11-19 10:15:56.083478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57378 ] 00:04:42.665 [2024-11-19 10:15:56.253941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.665 [2024-11-19 10:15:56.365718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.620 10:15:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:43.620 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:43.620 [2024-11-19 10:15:57.325103] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:04:43.620 [2024-11-19 10:15:57.325344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57402 ] 00:04:43.879 [2024-11-19 10:15:57.492003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.879 [2024-11-19 10:15:57.610750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.879 [2024-11-19 10:15:57.610922] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:43.879 [2024-11-19 10:15:57.610972] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:43.879 [2024-11-19 10:15:57.611005] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57378 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57378 ']' 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57378 00:04:44.139 10:15:57 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.139 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57378 00:04:44.398 killing process with pid 57378 00:04:44.398 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.398 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.398 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57378' 00:04:44.398 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57378 00:04:44.398 10:15:57 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57378 00:04:46.939 ************************************ 00:04:46.939 END TEST exit_on_failed_rpc_init 00:04:46.939 ************************************ 00:04:46.939 00:04:46.939 real 0m4.280s 00:04:46.939 user 0m4.594s 00:04:46.939 sys 0m0.589s 00:04:46.939 10:16:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.939 10:16:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:46.939 10:16:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:46.939 ************************************ 00:04:46.939 END TEST skip_rpc 00:04:46.939 ************************************ 00:04:46.939 00:04:46.939 real 0m23.468s 00:04:46.939 user 0m22.453s 00:04:46.939 sys 0m2.151s 00:04:46.939 10:16:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.939 10:16:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.939 10:16:00 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:46.939 10:16:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.939 10:16:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.939 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.939 ************************************ 00:04:46.939 START TEST rpc_client 00:04:46.939 ************************************ 00:04:46.939 10:16:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:46.939 * Looking for test storage... 00:04:46.939 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:46.939 10:16:00 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:46.939 10:16:00 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:46.939 10:16:00 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:46.939 10:16:00 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:46.939 10:16:00 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.940 10:16:00 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.940 --rc genhtml_branch_coverage=1 00:04:46.940 --rc genhtml_function_coverage=1 00:04:46.940 --rc genhtml_legend=1 00:04:46.940 --rc geninfo_all_blocks=1 00:04:46.940 --rc geninfo_unexecuted_blocks=1 00:04:46.940 00:04:46.940 ' 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.940 --rc genhtml_branch_coverage=1 00:04:46.940 --rc genhtml_function_coverage=1 00:04:46.940 --rc 
genhtml_legend=1 00:04:46.940 --rc geninfo_all_blocks=1 00:04:46.940 --rc geninfo_unexecuted_blocks=1 00:04:46.940 00:04:46.940 ' 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.940 --rc genhtml_branch_coverage=1 00:04:46.940 --rc genhtml_function_coverage=1 00:04:46.940 --rc genhtml_legend=1 00:04:46.940 --rc geninfo_all_blocks=1 00:04:46.940 --rc geninfo_unexecuted_blocks=1 00:04:46.940 00:04:46.940 ' 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:46.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.940 --rc genhtml_branch_coverage=1 00:04:46.940 --rc genhtml_function_coverage=1 00:04:46.940 --rc genhtml_legend=1 00:04:46.940 --rc geninfo_all_blocks=1 00:04:46.940 --rc geninfo_unexecuted_blocks=1 00:04:46.940 00:04:46.940 ' 00:04:46.940 10:16:00 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:46.940 OK 00:04:46.940 10:16:00 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:46.940 ************************************ 00:04:46.940 END TEST rpc_client 00:04:46.940 ************************************ 00:04:46.940 00:04:46.940 real 0m0.280s 00:04:46.940 user 0m0.147s 00:04:46.940 sys 0m0.146s 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.940 10:16:00 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:46.940 10:16:00 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:46.940 10:16:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.940 10:16:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.940 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:04:46.940 ************************************ 00:04:46.940 START TEST json_config 
00:04:46.940 ************************************ 00:04:46.940 10:16:00 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.201 10:16:00 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.201 10:16:00 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.201 10:16:00 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.201 10:16:00 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.201 10:16:00 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.201 10:16:00 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:47.201 10:16:00 json_config -- scripts/common.sh@345 -- # : 1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.201 10:16:00 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.201 10:16:00 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@353 -- # local d=1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.201 10:16:00 json_config -- scripts/common.sh@355 -- # echo 1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.201 10:16:00 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@353 -- # local d=2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.201 10:16:00 json_config -- scripts/common.sh@355 -- # echo 2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.201 10:16:00 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.201 10:16:00 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.201 10:16:00 json_config -- scripts/common.sh@368 -- # return 0 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.201 --rc genhtml_branch_coverage=1 00:04:47.201 --rc genhtml_function_coverage=1 00:04:47.201 --rc genhtml_legend=1 00:04:47.201 --rc geninfo_all_blocks=1 00:04:47.201 --rc geninfo_unexecuted_blocks=1 00:04:47.201 00:04:47.201 ' 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.201 --rc genhtml_branch_coverage=1 00:04:47.201 --rc genhtml_function_coverage=1 00:04:47.201 --rc genhtml_legend=1 00:04:47.201 --rc geninfo_all_blocks=1 00:04:47.201 --rc geninfo_unexecuted_blocks=1 00:04:47.201 00:04:47.201 ' 00:04:47.201 10:16:00 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.201 --rc genhtml_branch_coverage=1 00:04:47.201 --rc genhtml_function_coverage=1 00:04:47.201 --rc genhtml_legend=1 00:04:47.201 --rc geninfo_all_blocks=1 00:04:47.201 --rc geninfo_unexecuted_blocks=1 00:04:47.201 00:04:47.201 ' 00:04:47.201 10:16:00 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.201 --rc genhtml_branch_coverage=1 00:04:47.201 --rc genhtml_function_coverage=1 00:04:47.201 --rc genhtml_legend=1 00:04:47.201 --rc geninfo_all_blocks=1 00:04:47.201 --rc geninfo_unexecuted_blocks=1 00:04:47.201 00:04:47.201 ' 00:04:47.201 10:16:00 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.201 10:16:00 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:47.201 10:16:00 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9ccf630-8d77-473d-8904-7d75d98bdf9d 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=f9ccf630-8d77-473d-8904-7d75d98bdf9d 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.202 10:16:00 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.202 10:16:00 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.202 10:16:00 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.202 10:16:00 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.202 10:16:00 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.202 10:16:00 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.202 10:16:00 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.202 10:16:00 json_config -- paths/export.sh@5 -- # export PATH 00:04:47.202 10:16:00 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@51 -- # : 0 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.202 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.202 10:16:00 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:47.202 WARNING: No tests are enabled so not running JSON configuration tests 00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:47.202 10:16:00 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:47.202 00:04:47.202 real 0m0.221s 00:04:47.202 user 0m0.139s 00:04:47.202 sys 0m0.090s 00:04:47.202 10:16:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.202 10:16:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.202 ************************************ 00:04:47.202 END TEST json_config 00:04:47.202 ************************************ 00:04:47.463 10:16:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.463 10:16:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.463 10:16:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.463 10:16:00 -- common/autotest_common.sh@10 -- # set +x 00:04:47.463 ************************************ 00:04:47.463 START TEST json_config_extra_key 00:04:47.463 ************************************ 00:04:47.463 10:16:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:47.463 10:16:01 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.463 10:16:01 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:47.463 10:16:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.463 10:16:01 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.463 10:16:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:47.464 10:16:01 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.464 10:16:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.464 --rc genhtml_branch_coverage=1 00:04:47.464 --rc genhtml_function_coverage=1 00:04:47.464 --rc genhtml_legend=1 00:04:47.464 --rc geninfo_all_blocks=1 00:04:47.464 --rc geninfo_unexecuted_blocks=1 00:04:47.464 00:04:47.464 ' 00:04:47.464 10:16:01 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.464 --rc genhtml_branch_coverage=1 00:04:47.464 --rc genhtml_function_coverage=1 00:04:47.464 --rc 
genhtml_legend=1 00:04:47.464 --rc geninfo_all_blocks=1 00:04:47.464 --rc geninfo_unexecuted_blocks=1 00:04:47.464 00:04:47.464 ' 00:04:47.464 10:16:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.464 --rc genhtml_branch_coverage=1 00:04:47.464 --rc genhtml_function_coverage=1 00:04:47.464 --rc genhtml_legend=1 00:04:47.464 --rc geninfo_all_blocks=1 00:04:47.464 --rc geninfo_unexecuted_blocks=1 00:04:47.464 00:04:47.464 ' 00:04:47.464 10:16:01 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.464 --rc genhtml_branch_coverage=1 00:04:47.464 --rc genhtml_function_coverage=1 00:04:47.464 --rc genhtml_legend=1 00:04:47.464 --rc geninfo_all_blocks=1 00:04:47.464 --rc geninfo_unexecuted_blocks=1 00:04:47.464 00:04:47.464 ' 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f9ccf630-8d77-473d-8904-7d75d98bdf9d 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f9ccf630-8d77-473d-8904-7d75d98bdf9d 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.464 10:16:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.464 10:16:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.464 10:16:01 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.464 10:16:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.464 10:16:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.464 10:16:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
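The `scripts/common.sh` trace repeated above (`IFS=.-:`, `read -ra ver1`/`ver2`, then a component-wise `decimal` compare) is how the harness answers `lt 1.15 2` for its lcov version gate. Below is a simplified sketch of that pattern, reconstructed from the xtrace output rather than copied from SPDK's source; the function name and the exact handling of non-numeric components are assumptions.

```shell
#!/usr/bin/env bash
# Component-wise version comparison as visible in the trace:
# split both versions on '.', '-' and ':', then compare fields
# left to right as integers. Reconstructed sketch, not SPDK's code.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}
    [[ $a =~ ^[0-9]+$ ]] || a=0   # simplified form of the trace's `decimal` guard
    [[ $b =~ ^[0-9]+$ ]] || b=0
    (( a > b )) && return 1       # ver1 is newer: not less-than
    (( a < b )) && return 0       # ver1 is older: less-than
  done
  return 1                        # equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

Missing trailing components default to 0, which is why `1.15` compares less than `2` in the trace (`ver1[v]=1` vs `ver2[v]=2` decides at the first field).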
00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:47.464 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:47.464 10:16:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.464 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.465 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.465 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:47.465 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.465 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.465 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:47.465 INFO: launching applications... 
00:04:47.465 10:16:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57606 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:47.465 Waiting for target to run... 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57606 /var/tmp/spdk_tgt.sock 00:04:47.465 10:16:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:47.465 10:16:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57606 ']' 00:04:47.465 10:16:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.465 10:16:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.465 10:16:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:04:47.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.465 10:16:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.465 10:16:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.725 [2024-11-19 10:16:01.309013] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:47.725 [2024-11-19 10:16:01.309146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57606 ] 00:04:47.985 [2024-11-19 10:16:01.699035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.245 [2024-11-19 10:16:01.801344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.814 10:16:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.814 10:16:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:48.814 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:48.814 INFO: shutting down applications... 00:04:48.814 10:16:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
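The shutdown sequence traced in the lines that follow is a poll-until-dead loop: send SIGINT once, then probe the process with `kill -0` (signal 0, an existence check that delivers no signal) every 0.5 s, for up to 30 iterations. A hedged sketch of that pattern; the function name and the optional signal parameter are illustrative additions, not `json_config/common.sh` verbatim.

```shell
#!/usr/bin/env bash
# Poll-until-dead shutdown as seen in the trace: signal once, then check
# process existence with `kill -0` at 0.5 s intervals (~15 s cap).
# Function name and signal argument are illustrative assumptions.
shutdown_app() {
  local pid=$1 sig=${2:-SIGINT} i
  kill -s "$sig" "$pid" 2>/dev/null || return 0  # already gone
  for (( i = 0; i < 30; i++ )); do
    kill -0 "$pid" 2>/dev/null || return 0       # exited: success
    sleep 0.5
  done
  return 1                                       # still alive after ~15 s
}
```

In the log the target exits after several 0.5 s rounds, producing the 'SPDK target shutdown done' line; the 30-round cap is the `(( i < 30 ))` guard visible in the trace.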
00:04:48.814 10:16:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57606 ]] 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57606 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:48.814 10:16:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.384 10:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.384 10:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.384 10:16:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:49.384 10:16:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.953 10:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.953 10:16:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.953 10:16:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:49.953 10:16:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.523 10:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.523 10:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.523 10:16:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:50.523 10:16:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.783 10:16:04 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:50.783 10:16:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.783 10:16:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:50.783 10:16:04 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.353 10:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.353 10:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.353 10:16:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:51.353 10:16:05 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57606 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:51.923 SPDK target shutdown done 00:04:51.923 10:16:05 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:51.923 Success 00:04:51.923 10:16:05 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:51.923 00:04:51.923 real 0m4.547s 00:04:51.923 user 0m3.896s 00:04:51.923 sys 0m0.553s 00:04:51.923 10:16:05 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.923 10:16:05 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:51.923 ************************************ 00:04:51.923 END TEST json_config_extra_key 00:04:51.923 ************************************ 00:04:51.923 10:16:05 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.923 10:16:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:51.923 10:16:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.923 10:16:05 -- common/autotest_common.sh@10 -- # set +x 00:04:51.923 ************************************ 00:04:51.923 START TEST alias_rpc 00:04:51.923 ************************************ 00:04:51.923 10:16:05 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:51.923 * Looking for test storage... 00:04:52.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:52.183 10:16:05 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.183 10:16:05 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.183 --rc genhtml_branch_coverage=1 00:04:52.183 --rc genhtml_function_coverage=1 00:04:52.183 --rc genhtml_legend=1 00:04:52.183 --rc geninfo_all_blocks=1 00:04:52.183 --rc geninfo_unexecuted_blocks=1 00:04:52.183 00:04:52.183 ' 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.183 --rc genhtml_branch_coverage=1 00:04:52.183 --rc genhtml_function_coverage=1 00:04:52.183 --rc 
genhtml_legend=1 00:04:52.183 --rc geninfo_all_blocks=1 00:04:52.183 --rc geninfo_unexecuted_blocks=1 00:04:52.183 00:04:52.183 ' 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.183 --rc genhtml_branch_coverage=1 00:04:52.183 --rc genhtml_function_coverage=1 00:04:52.183 --rc genhtml_legend=1 00:04:52.183 --rc geninfo_all_blocks=1 00:04:52.183 --rc geninfo_unexecuted_blocks=1 00:04:52.183 00:04:52.183 ' 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.183 --rc genhtml_branch_coverage=1 00:04:52.183 --rc genhtml_function_coverage=1 00:04:52.183 --rc genhtml_legend=1 00:04:52.183 --rc geninfo_all_blocks=1 00:04:52.183 --rc geninfo_unexecuted_blocks=1 00:04:52.183 00:04:52.183 ' 00:04:52.183 10:16:05 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:52.183 10:16:05 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57722 00:04:52.183 10:16:05 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57722 00:04:52.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57722 ']' 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.183 10:16:05 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.183 10:16:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.183 [2024-11-19 10:16:05.883344] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:52.183 [2024-11-19 10:16:05.883471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57722 ] 00:04:52.443 [2024-11-19 10:16:06.057694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.443 [2024-11-19 10:16:06.166368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.383 10:16:06 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:53.383 10:16:06 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:53.383 10:16:06 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:53.643 10:16:07 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57722 00:04:53.643 10:16:07 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57722 ']' 00:04:53.643 10:16:07 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57722 00:04:53.643 10:16:07 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:53.643 10:16:07 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.643 10:16:07 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57722 00:04:53.643 killing process with pid 57722 00:04:53.644 10:16:07 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.644 10:16:07 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.644 10:16:07 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57722' 00:04:53.644 10:16:07 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57722 00:04:53.644 10:16:07 alias_rpc -- common/autotest_common.sh@978 -- # wait 57722 00:04:56.185 00:04:56.185 real 0m3.917s 00:04:56.185 user 0m3.939s 00:04:56.185 sys 0m0.523s 00:04:56.185 10:16:09 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.185 10:16:09 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.185 ************************************ 00:04:56.185 END TEST alias_rpc 00:04:56.185 ************************************ 00:04:56.185 10:16:09 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:56.185 10:16:09 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.185 10:16:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.185 10:16:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.185 10:16:09 -- common/autotest_common.sh@10 -- # set +x 00:04:56.185 ************************************ 00:04:56.185 START TEST spdkcli_tcp 00:04:56.185 ************************************ 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:56.185 * Looking for test storage... 
00:04:56.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.185 10:16:09 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.185 --rc genhtml_branch_coverage=1 00:04:56.185 --rc genhtml_function_coverage=1 00:04:56.185 --rc genhtml_legend=1 00:04:56.185 --rc geninfo_all_blocks=1 00:04:56.185 --rc geninfo_unexecuted_blocks=1 00:04:56.185 00:04:56.185 ' 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.185 --rc genhtml_branch_coverage=1 00:04:56.185 --rc genhtml_function_coverage=1 00:04:56.185 --rc genhtml_legend=1 00:04:56.185 --rc geninfo_all_blocks=1 00:04:56.185 --rc geninfo_unexecuted_blocks=1 00:04:56.185 00:04:56.185 ' 00:04:56.185 10:16:09 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.185 --rc genhtml_branch_coverage=1 00:04:56.185 --rc genhtml_function_coverage=1 00:04:56.185 --rc genhtml_legend=1 00:04:56.185 --rc geninfo_all_blocks=1 00:04:56.185 --rc geninfo_unexecuted_blocks=1 00:04:56.185 00:04:56.185 ' 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.185 --rc genhtml_branch_coverage=1 00:04:56.185 --rc genhtml_function_coverage=1 00:04:56.185 --rc genhtml_legend=1 00:04:56.185 --rc geninfo_all_blocks=1 00:04:56.185 --rc geninfo_unexecuted_blocks=1 00:04:56.185 00:04:56.185 ' 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57825 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.185 10:16:09 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57825 00:04:56.185 10:16:09 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57825 ']' 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.185 10:16:09 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.186 10:16:09 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.186 10:16:09 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.186 10:16:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.186 [2024-11-19 10:16:09.895947] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:04:56.186 [2024-11-19 10:16:09.896069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57825 ] 00:04:56.446 [2024-11-19 10:16:10.071060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.446 [2024-11-19 10:16:10.185871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.446 [2024-11-19 10:16:10.185923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.382 10:16:10 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.382 10:16:10 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:57.382 10:16:10 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.382 10:16:10 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57842 00:04:57.382 10:16:10 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:57.643 [ 00:04:57.643 "bdev_malloc_delete", 
00:04:57.643 "bdev_malloc_create", 00:04:57.643 "bdev_null_resize", 00:04:57.643 "bdev_null_delete", 00:04:57.643 "bdev_null_create", 00:04:57.643 "bdev_nvme_cuse_unregister", 00:04:57.643 "bdev_nvme_cuse_register", 00:04:57.643 "bdev_opal_new_user", 00:04:57.643 "bdev_opal_set_lock_state", 00:04:57.643 "bdev_opal_delete", 00:04:57.643 "bdev_opal_get_info", 00:04:57.643 "bdev_opal_create", 00:04:57.643 "bdev_nvme_opal_revert", 00:04:57.643 "bdev_nvme_opal_init", 00:04:57.643 "bdev_nvme_send_cmd", 00:04:57.643 "bdev_nvme_set_keys", 00:04:57.643 "bdev_nvme_get_path_iostat", 00:04:57.643 "bdev_nvme_get_mdns_discovery_info", 00:04:57.643 "bdev_nvme_stop_mdns_discovery", 00:04:57.643 "bdev_nvme_start_mdns_discovery", 00:04:57.643 "bdev_nvme_set_multipath_policy", 00:04:57.643 "bdev_nvme_set_preferred_path", 00:04:57.643 "bdev_nvme_get_io_paths", 00:04:57.643 "bdev_nvme_remove_error_injection", 00:04:57.643 "bdev_nvme_add_error_injection", 00:04:57.643 "bdev_nvme_get_discovery_info", 00:04:57.643 "bdev_nvme_stop_discovery", 00:04:57.643 "bdev_nvme_start_discovery", 00:04:57.643 "bdev_nvme_get_controller_health_info", 00:04:57.643 "bdev_nvme_disable_controller", 00:04:57.643 "bdev_nvme_enable_controller", 00:04:57.643 "bdev_nvme_reset_controller", 00:04:57.643 "bdev_nvme_get_transport_statistics", 00:04:57.643 "bdev_nvme_apply_firmware", 00:04:57.643 "bdev_nvme_detach_controller", 00:04:57.643 "bdev_nvme_get_controllers", 00:04:57.643 "bdev_nvme_attach_controller", 00:04:57.643 "bdev_nvme_set_hotplug", 00:04:57.643 "bdev_nvme_set_options", 00:04:57.643 "bdev_passthru_delete", 00:04:57.643 "bdev_passthru_create", 00:04:57.643 "bdev_lvol_set_parent_bdev", 00:04:57.643 "bdev_lvol_set_parent", 00:04:57.643 "bdev_lvol_check_shallow_copy", 00:04:57.643 "bdev_lvol_start_shallow_copy", 00:04:57.643 "bdev_lvol_grow_lvstore", 00:04:57.643 "bdev_lvol_get_lvols", 00:04:57.643 "bdev_lvol_get_lvstores", 00:04:57.643 "bdev_lvol_delete", 00:04:57.643 "bdev_lvol_set_read_only", 
00:04:57.643 "bdev_lvol_resize", 00:04:57.643 "bdev_lvol_decouple_parent", 00:04:57.643 "bdev_lvol_inflate", 00:04:57.643 "bdev_lvol_rename", 00:04:57.643 "bdev_lvol_clone_bdev", 00:04:57.643 "bdev_lvol_clone", 00:04:57.643 "bdev_lvol_snapshot", 00:04:57.643 "bdev_lvol_create", 00:04:57.643 "bdev_lvol_delete_lvstore", 00:04:57.643 "bdev_lvol_rename_lvstore", 00:04:57.643 "bdev_lvol_create_lvstore", 00:04:57.643 "bdev_raid_set_options", 00:04:57.643 "bdev_raid_remove_base_bdev", 00:04:57.643 "bdev_raid_add_base_bdev", 00:04:57.643 "bdev_raid_delete", 00:04:57.643 "bdev_raid_create", 00:04:57.643 "bdev_raid_get_bdevs", 00:04:57.643 "bdev_error_inject_error", 00:04:57.643 "bdev_error_delete", 00:04:57.643 "bdev_error_create", 00:04:57.643 "bdev_split_delete", 00:04:57.643 "bdev_split_create", 00:04:57.643 "bdev_delay_delete", 00:04:57.643 "bdev_delay_create", 00:04:57.643 "bdev_delay_update_latency", 00:04:57.643 "bdev_zone_block_delete", 00:04:57.643 "bdev_zone_block_create", 00:04:57.643 "blobfs_create", 00:04:57.643 "blobfs_detect", 00:04:57.643 "blobfs_set_cache_size", 00:04:57.643 "bdev_aio_delete", 00:04:57.643 "bdev_aio_rescan", 00:04:57.643 "bdev_aio_create", 00:04:57.643 "bdev_ftl_set_property", 00:04:57.643 "bdev_ftl_get_properties", 00:04:57.643 "bdev_ftl_get_stats", 00:04:57.643 "bdev_ftl_unmap", 00:04:57.643 "bdev_ftl_unload", 00:04:57.643 "bdev_ftl_delete", 00:04:57.643 "bdev_ftl_load", 00:04:57.643 "bdev_ftl_create", 00:04:57.643 "bdev_virtio_attach_controller", 00:04:57.643 "bdev_virtio_scsi_get_devices", 00:04:57.643 "bdev_virtio_detach_controller", 00:04:57.643 "bdev_virtio_blk_set_hotplug", 00:04:57.643 "bdev_iscsi_delete", 00:04:57.643 "bdev_iscsi_create", 00:04:57.643 "bdev_iscsi_set_options", 00:04:57.643 "accel_error_inject_error", 00:04:57.643 "ioat_scan_accel_module", 00:04:57.643 "dsa_scan_accel_module", 00:04:57.643 "iaa_scan_accel_module", 00:04:57.643 "keyring_file_remove_key", 00:04:57.643 "keyring_file_add_key", 00:04:57.643 
"keyring_linux_set_options", 00:04:57.643 "fsdev_aio_delete", 00:04:57.643 "fsdev_aio_create", 00:04:57.643 "iscsi_get_histogram", 00:04:57.643 "iscsi_enable_histogram", 00:04:57.643 "iscsi_set_options", 00:04:57.643 "iscsi_get_auth_groups", 00:04:57.643 "iscsi_auth_group_remove_secret", 00:04:57.643 "iscsi_auth_group_add_secret", 00:04:57.643 "iscsi_delete_auth_group", 00:04:57.643 "iscsi_create_auth_group", 00:04:57.643 "iscsi_set_discovery_auth", 00:04:57.643 "iscsi_get_options", 00:04:57.643 "iscsi_target_node_request_logout", 00:04:57.643 "iscsi_target_node_set_redirect", 00:04:57.643 "iscsi_target_node_set_auth", 00:04:57.643 "iscsi_target_node_add_lun", 00:04:57.643 "iscsi_get_stats", 00:04:57.643 "iscsi_get_connections", 00:04:57.643 "iscsi_portal_group_set_auth", 00:04:57.643 "iscsi_start_portal_group", 00:04:57.643 "iscsi_delete_portal_group", 00:04:57.643 "iscsi_create_portal_group", 00:04:57.643 "iscsi_get_portal_groups", 00:04:57.643 "iscsi_delete_target_node", 00:04:57.643 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.643 "iscsi_target_node_add_pg_ig_maps", 00:04:57.643 "iscsi_create_target_node", 00:04:57.643 "iscsi_get_target_nodes", 00:04:57.643 "iscsi_delete_initiator_group", 00:04:57.643 "iscsi_initiator_group_remove_initiators", 00:04:57.643 "iscsi_initiator_group_add_initiators", 00:04:57.643 "iscsi_create_initiator_group", 00:04:57.643 "iscsi_get_initiator_groups", 00:04:57.643 "nvmf_set_crdt", 00:04:57.643 "nvmf_set_config", 00:04:57.643 "nvmf_set_max_subsystems", 00:04:57.643 "nvmf_stop_mdns_prr", 00:04:57.644 "nvmf_publish_mdns_prr", 00:04:57.644 "nvmf_subsystem_get_listeners", 00:04:57.644 "nvmf_subsystem_get_qpairs", 00:04:57.644 "nvmf_subsystem_get_controllers", 00:04:57.644 "nvmf_get_stats", 00:04:57.644 "nvmf_get_transports", 00:04:57.644 "nvmf_create_transport", 00:04:57.644 "nvmf_get_targets", 00:04:57.644 "nvmf_delete_target", 00:04:57.644 "nvmf_create_target", 00:04:57.644 "nvmf_subsystem_allow_any_host", 00:04:57.644 
"nvmf_subsystem_set_keys", 00:04:57.644 "nvmf_subsystem_remove_host", 00:04:57.644 "nvmf_subsystem_add_host", 00:04:57.644 "nvmf_ns_remove_host", 00:04:57.644 "nvmf_ns_add_host", 00:04:57.644 "nvmf_subsystem_remove_ns", 00:04:57.644 "nvmf_subsystem_set_ns_ana_group", 00:04:57.644 "nvmf_subsystem_add_ns", 00:04:57.644 "nvmf_subsystem_listener_set_ana_state", 00:04:57.644 "nvmf_discovery_get_referrals", 00:04:57.644 "nvmf_discovery_remove_referral", 00:04:57.644 "nvmf_discovery_add_referral", 00:04:57.644 "nvmf_subsystem_remove_listener", 00:04:57.644 "nvmf_subsystem_add_listener", 00:04:57.644 "nvmf_delete_subsystem", 00:04:57.644 "nvmf_create_subsystem", 00:04:57.644 "nvmf_get_subsystems", 00:04:57.644 "env_dpdk_get_mem_stats", 00:04:57.644 "nbd_get_disks", 00:04:57.644 "nbd_stop_disk", 00:04:57.644 "nbd_start_disk", 00:04:57.644 "ublk_recover_disk", 00:04:57.644 "ublk_get_disks", 00:04:57.644 "ublk_stop_disk", 00:04:57.644 "ublk_start_disk", 00:04:57.644 "ublk_destroy_target", 00:04:57.644 "ublk_create_target", 00:04:57.644 "virtio_blk_create_transport", 00:04:57.644 "virtio_blk_get_transports", 00:04:57.644 "vhost_controller_set_coalescing", 00:04:57.644 "vhost_get_controllers", 00:04:57.644 "vhost_delete_controller", 00:04:57.644 "vhost_create_blk_controller", 00:04:57.644 "vhost_scsi_controller_remove_target", 00:04:57.644 "vhost_scsi_controller_add_target", 00:04:57.644 "vhost_start_scsi_controller", 00:04:57.644 "vhost_create_scsi_controller", 00:04:57.644 "thread_set_cpumask", 00:04:57.644 "scheduler_set_options", 00:04:57.644 "framework_get_governor", 00:04:57.644 "framework_get_scheduler", 00:04:57.644 "framework_set_scheduler", 00:04:57.644 "framework_get_reactors", 00:04:57.644 "thread_get_io_channels", 00:04:57.644 "thread_get_pollers", 00:04:57.644 "thread_get_stats", 00:04:57.644 "framework_monitor_context_switch", 00:04:57.644 "spdk_kill_instance", 00:04:57.644 "log_enable_timestamps", 00:04:57.644 "log_get_flags", 00:04:57.644 "log_clear_flag", 
00:04:57.644 "log_set_flag", 00:04:57.644 "log_get_level", 00:04:57.644 "log_set_level", 00:04:57.644 "log_get_print_level", 00:04:57.644 "log_set_print_level", 00:04:57.644 "framework_enable_cpumask_locks", 00:04:57.644 "framework_disable_cpumask_locks", 00:04:57.644 "framework_wait_init", 00:04:57.644 "framework_start_init", 00:04:57.644 "scsi_get_devices", 00:04:57.644 "bdev_get_histogram", 00:04:57.644 "bdev_enable_histogram", 00:04:57.644 "bdev_set_qos_limit", 00:04:57.644 "bdev_set_qd_sampling_period", 00:04:57.644 "bdev_get_bdevs", 00:04:57.644 "bdev_reset_iostat", 00:04:57.644 "bdev_get_iostat", 00:04:57.644 "bdev_examine", 00:04:57.644 "bdev_wait_for_examine", 00:04:57.644 "bdev_set_options", 00:04:57.644 "accel_get_stats", 00:04:57.644 "accel_set_options", 00:04:57.644 "accel_set_driver", 00:04:57.644 "accel_crypto_key_destroy", 00:04:57.644 "accel_crypto_keys_get", 00:04:57.644 "accel_crypto_key_create", 00:04:57.644 "accel_assign_opc", 00:04:57.644 "accel_get_module_info", 00:04:57.644 "accel_get_opc_assignments", 00:04:57.644 "vmd_rescan", 00:04:57.644 "vmd_remove_device", 00:04:57.644 "vmd_enable", 00:04:57.644 "sock_get_default_impl", 00:04:57.644 "sock_set_default_impl", 00:04:57.644 "sock_impl_set_options", 00:04:57.644 "sock_impl_get_options", 00:04:57.644 "iobuf_get_stats", 00:04:57.644 "iobuf_set_options", 00:04:57.644 "keyring_get_keys", 00:04:57.644 "framework_get_pci_devices", 00:04:57.644 "framework_get_config", 00:04:57.644 "framework_get_subsystems", 00:04:57.644 "fsdev_set_opts", 00:04:57.644 "fsdev_get_opts", 00:04:57.644 "trace_get_info", 00:04:57.644 "trace_get_tpoint_group_mask", 00:04:57.644 "trace_disable_tpoint_group", 00:04:57.644 "trace_enable_tpoint_group", 00:04:57.644 "trace_clear_tpoint_mask", 00:04:57.644 "trace_set_tpoint_mask", 00:04:57.644 "notify_get_notifications", 00:04:57.644 "notify_get_types", 00:04:57.644 "spdk_get_version", 00:04:57.644 "rpc_get_methods" 00:04:57.644 ] 00:04:57.644 10:16:11 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.644 10:16:11 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.644 10:16:11 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57825 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57825 ']' 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57825 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57825 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57825' 00:04:57.644 killing process with pid 57825 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57825 00:04:57.644 10:16:11 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57825 00:05:00.176 00:05:00.176 real 0m4.015s 00:05:00.176 user 0m7.141s 00:05:00.176 sys 0m0.620s 00:05:00.176 10:16:13 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.176 10:16:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:00.176 ************************************ 00:05:00.176 END TEST spdkcli_tcp 00:05:00.176 ************************************ 00:05:00.176 10:16:13 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.176 10:16:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.176 10:16:13 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.176 10:16:13 -- common/autotest_common.sh@10 -- # set +x 00:05:00.176 ************************************ 00:05:00.176 START TEST dpdk_mem_utility 00:05:00.176 ************************************ 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:00.176 * Looking for test storage... 00:05:00.176 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:00.176 
10:16:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.176 10:16:13 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.176 --rc genhtml_branch_coverage=1 00:05:00.176 --rc genhtml_function_coverage=1 00:05:00.176 --rc genhtml_legend=1 00:05:00.176 --rc geninfo_all_blocks=1 00:05:00.176 --rc geninfo_unexecuted_blocks=1 00:05:00.176 00:05:00.176 ' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.176 --rc 
genhtml_branch_coverage=1 00:05:00.176 --rc genhtml_function_coverage=1 00:05:00.176 --rc genhtml_legend=1 00:05:00.176 --rc geninfo_all_blocks=1 00:05:00.176 --rc geninfo_unexecuted_blocks=1 00:05:00.176 00:05:00.176 ' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.176 --rc genhtml_branch_coverage=1 00:05:00.176 --rc genhtml_function_coverage=1 00:05:00.176 --rc genhtml_legend=1 00:05:00.176 --rc geninfo_all_blocks=1 00:05:00.176 --rc geninfo_unexecuted_blocks=1 00:05:00.176 00:05:00.176 ' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.176 --rc genhtml_branch_coverage=1 00:05:00.176 --rc genhtml_function_coverage=1 00:05:00.176 --rc genhtml_legend=1 00:05:00.176 --rc geninfo_all_blocks=1 00:05:00.176 --rc geninfo_unexecuted_blocks=1 00:05:00.176 00:05:00.176 ' 00:05:00.176 10:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:00.176 10:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.176 10:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57947 00:05:00.176 10:16:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57947 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57947 ']' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.176 10:16:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:00.434 [2024-11-19 10:16:13.969857] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:00.434 [2024-11-19 10:16:13.969979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57947 ] 00:05:00.434 [2024-11-19 10:16:14.142936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.692 [2024-11-19 10:16:14.260272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.632 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.632 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:01.632 10:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:01.632 10:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:01.632 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.632 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:01.632 { 00:05:01.632 "filename": "/tmp/spdk_mem_dump.txt" 00:05:01.632 } 00:05:01.632 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.632 10:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:01.632 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:01.632 1 heaps 
totaling size 816.000000 MiB 00:05:01.632 size: 816.000000 MiB heap id: 0 00:05:01.632 end heaps---------- 00:05:01.632 9 mempools totaling size 595.772034 MiB 00:05:01.632 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:01.632 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:01.632 size: 92.545471 MiB name: bdev_io_57947 00:05:01.632 size: 50.003479 MiB name: msgpool_57947 00:05:01.632 size: 36.509338 MiB name: fsdev_io_57947 00:05:01.632 size: 21.763794 MiB name: PDU_Pool 00:05:01.632 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:01.632 size: 4.133484 MiB name: evtpool_57947 00:05:01.632 size: 0.026123 MiB name: Session_Pool 00:05:01.632 end mempools------- 00:05:01.632 6 memzones totaling size 4.142822 MiB 00:05:01.632 size: 1.000366 MiB name: RG_ring_0_57947 00:05:01.632 size: 1.000366 MiB name: RG_ring_1_57947 00:05:01.632 size: 1.000366 MiB name: RG_ring_4_57947 00:05:01.632 size: 1.000366 MiB name: RG_ring_5_57947 00:05:01.632 size: 0.125366 MiB name: RG_ring_2_57947 00:05:01.632 size: 0.015991 MiB name: RG_ring_3_57947 00:05:01.632 end memzones------- 00:05:01.632 10:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:01.632 heap id: 0 total size: 816.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:01.632 list of free elements. 
size: 16.791870 MiB 00:05:01.632 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:01.632 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:01.632 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:01.632 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:01.632 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:01.633 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:01.633 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:01.633 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:01.633 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:01.633 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:01.633 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:01.633 element at address: 0x20001ac00000 with size: 0.562439 MiB 00:05:01.633 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:01.633 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:01.633 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:01.633 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:01.633 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:01.633 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:01.633 list of standard malloc elements. 
size: 199.287231 MiB 00:05:01.633 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:01.633 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:01.633 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:01.633 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:01.633 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:01.633 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:01.633 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:01.633 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:01.633 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:01.633 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:01.633 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:01.633 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:01.633 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:05:01.633 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:01.633 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:01.633 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:01.633 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:01.634 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:01.634 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:01.634 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac913c0 with size: 0.000244 
MiB 00:05:01.634 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac92fc0 
with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:01.634 element at 
address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:01.634 element at address: 0x200028063f40 with size: 0.000244 MiB 00:05:01.634 element at address: 0x200028064040 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806af80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b080 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b180 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b280 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b380 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806be80 with size: 0.000244 MiB 
00:05:01.634 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:01.634 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806da80 with 
size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:01.635 element at address: 
0x20002806f680 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:01.635 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:01.635 list of memzone associated elements. size: 599.920898 MiB 00:05:01.635 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:01.635 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:01.635 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:01.635 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:01.635 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:01.635 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57947_0 00:05:01.635 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:01.635 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57947_0 00:05:01.635 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:01.635 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57947_0 00:05:01.635 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:01.635 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:01.635 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:01.635 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:01.635 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:01.635 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57947_0 00:05:01.635 element at address: 0x2000009ffdc0 with 
size: 2.000549 MiB 00:05:01.635 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57947 00:05:01.635 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:01.635 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57947 00:05:01.635 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:01.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:01.635 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:01.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:01.635 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:01.635 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:01.635 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:01.635 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:01.635 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:01.635 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57947 00:05:01.635 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:01.635 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57947 00:05:01.635 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:01.635 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57947 00:05:01.635 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:01.635 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57947 00:05:01.635 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:01.635 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57947 00:05:01.635 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:01.635 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57947 00:05:01.635 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:01.635 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:01.635 element at address: 0x200012c72280 with size: 
0.500549 MiB 00:05:01.635 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:01.635 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:01.635 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:01.635 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:01.635 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57947 00:05:01.635 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:01.635 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57947 00:05:01.635 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:01.635 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:01.635 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:01.635 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:01.635 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:01.635 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57947 00:05:01.635 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:01.635 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:01.635 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:01.635 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57947 00:05:01.635 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:01.635 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57947 00:05:01.635 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:01.635 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57947 00:05:01.635 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:01.635 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:01.635 10:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:01.635 10:16:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 
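The dump above lists every malloc element and memzone with its address and size, and the header lines report summed totals (e.g. "list of standard malloc elements. size: 199.287231 MiB"). Those totals can be cross-checked from a saved copy of the dump. A minimal sketch, assuming the dump was captured to a file; `sum_element_sizes` is a hypothetical helper, not part of the SPDK tree:

```shell
# sum_element_sizes - hypothetical helper, not part of the SPDK tree.
# Sums the "with size: X MiB" fields from a saved dpdk_mem_utility dump
# so the result can be compared against the totals the tool reports.
sum_element_sizes() {
    # Extract every numeric size that follows "with size:" and add them up.
    grep -o 'with size: [0-9.]*' "$1" | awk '{total += $3} END {printf "%.6f\n", total}'
}
```

Usage: `sum_element_sizes memdump.txt` prints the total MiB across all listed elements.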
-- # killprocess 57947 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57947 ']' 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57947 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57947 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.635 killing process with pid 57947 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57947' 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57947 00:05:01.635 10:16:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57947 00:05:04.194 00:05:04.194 real 0m3.899s 00:05:04.194 user 0m3.806s 00:05:04.194 sys 0m0.539s 00:05:04.194 10:16:17 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.194 10:16:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:04.194 ************************************ 00:05:04.194 END TEST dpdk_mem_utility 00:05:04.194 ************************************ 00:05:04.194 10:16:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:04.194 10:16:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:04.194 10:16:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.194 10:16:17 -- common/autotest_common.sh@10 -- # set +x 00:05:04.194 ************************************ 00:05:04.194 START TEST event 00:05:04.194 ************************************ 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1129 -- # 
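The `killprocess 57947` trace above follows a check-then-kill pattern: verify the pid is non-empty, probe it with `kill -0`, inspect the process name with `ps`, then `kill` and `wait` to reap it. A simplified sketch of that pattern, not the actual `test/common/autotest_common.sh` implementation:

```shell
# Simplified sketch of the check-then-kill pattern visible in the trace
# above; the real helper lives in test/common/autotest_common.sh.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it; wait only works for child processes
}
```

The `kill -0` probe sends no signal; it only checks that the pid exists and is signalable, which is why the trace runs it before deciding how to proceed.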
/home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:04.194 * Looking for test storage... 00:05:04.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:04.194 10:16:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:04.194 10:16:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:04.194 10:16:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:04.194 10:16:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:04.194 10:16:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:04.194 10:16:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:04.194 10:16:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:04.194 10:16:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:04.194 10:16:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:04.194 10:16:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:04.194 10:16:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:04.194 10:16:17 event -- scripts/common.sh@344 -- # case "$op" in 00:05:04.194 10:16:17 event -- scripts/common.sh@345 -- # : 1 00:05:04.194 10:16:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:04.194 10:16:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:04.194 10:16:17 event -- scripts/common.sh@365 -- # decimal 1 00:05:04.194 10:16:17 event -- scripts/common.sh@353 -- # local d=1 00:05:04.194 10:16:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:04.194 10:16:17 event -- scripts/common.sh@355 -- # echo 1 00:05:04.194 10:16:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:04.194 10:16:17 event -- scripts/common.sh@366 -- # decimal 2 00:05:04.194 10:16:17 event -- scripts/common.sh@353 -- # local d=2 00:05:04.194 10:16:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:04.194 10:16:17 event -- scripts/common.sh@355 -- # echo 2 00:05:04.194 10:16:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:04.194 10:16:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:04.194 10:16:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:04.194 10:16:17 event -- scripts/common.sh@368 -- # return 0 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:04.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.194 --rc genhtml_branch_coverage=1 00:05:04.194 --rc genhtml_function_coverage=1 00:05:04.194 --rc genhtml_legend=1 00:05:04.194 --rc geninfo_all_blocks=1 00:05:04.194 --rc geninfo_unexecuted_blocks=1 00:05:04.194 00:05:04.194 ' 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:04.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.194 --rc genhtml_branch_coverage=1 00:05:04.194 --rc genhtml_function_coverage=1 00:05:04.194 --rc genhtml_legend=1 00:05:04.194 --rc geninfo_all_blocks=1 00:05:04.194 --rc geninfo_unexecuted_blocks=1 00:05:04.194 00:05:04.194 ' 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:04.194 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:04.194 --rc genhtml_branch_coverage=1 00:05:04.194 --rc genhtml_function_coverage=1 00:05:04.194 --rc genhtml_legend=1 00:05:04.194 --rc geninfo_all_blocks=1 00:05:04.194 --rc geninfo_unexecuted_blocks=1 00:05:04.194 00:05:04.194 ' 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:04.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:04.194 --rc genhtml_branch_coverage=1 00:05:04.194 --rc genhtml_function_coverage=1 00:05:04.194 --rc genhtml_legend=1 00:05:04.194 --rc geninfo_all_blocks=1 00:05:04.194 --rc geninfo_unexecuted_blocks=1 00:05:04.194 00:05:04.194 ' 00:05:04.194 10:16:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:04.194 10:16:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:04.194 10:16:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:04.194 10:16:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:04.194 10:16:17 event -- common/autotest_common.sh@10 -- # set +x 00:05:04.194 ************************************ 00:05:04.194 START TEST event_perf 00:05:04.194 ************************************ 00:05:04.194 10:16:17 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:04.194 Running I/O for 1 seconds...[2024-11-19 10:16:17.872222] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
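The `lt 1.15 2` / `cmp_versions` trace above gates the lcov flags by splitting each version string on `.-:` (via `IFS`) and comparing the fields numerically, left to right. A condensed sketch of the same idea, not the actual `scripts/common.sh` code:

```shell
# Field-by-field dotted-version comparison, mirroring the cmp_versions
# trace above (IFS split on ".-:", numeric compare per field).
# Returns 0 (true) when $1 is strictly less than $2.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}   # missing fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not "less than"
}
```

With this sketch, `version_lt 1.15 2` succeeds, matching the `lt 1.15 2` check in the trace.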
00:05:04.194 [2024-11-19 10:16:17.872325] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58055 ] 00:05:04.452 [2024-11-19 10:16:18.047437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.452 [2024-11-19 10:16:18.155634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.452 [2024-11-19 10:16:18.155804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.452 [2024-11-19 10:16:18.156525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.452 Running I/O for 1 seconds...[2024-11-19 10:16:18.156569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:05.859 00:05:05.859 lcore 0: 211904 00:05:05.859 lcore 1: 211904 00:05:05.859 lcore 2: 211904 00:05:05.859 lcore 3: 211904 00:05:05.859 done. 
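The event_perf run above was launched with `-m 0xF`, and the four `lcore N:` lines confirm that cores 0-3 were active. A minimal sketch of how such a hex core mask maps to lcore numbers (the mask value is taken from the log; the loop bound of 8 is purely illustrative):

```shell
#!/usr/bin/env bash
# Decode which lcores a core mask such as "-m 0xF" selects.
# 0xF is binary 1111, so bits 0-3 (lcores 0-3) are set.
mask=0xF
for core in 0 1 2 3 4 5 6 7; do
    if (( (mask >> core) & 1 )); then
        echo "lcore $core enabled"
    fi
done
```

This matches the log: four reactors start, one per set bit, and each reports its own `lcore N` counter.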
00:05:05.859 00:05:05.859 real 0m1.561s 00:05:05.859 user 0m4.340s 00:05:05.859 sys 0m0.102s 00:05:05.859 10:16:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.859 10:16:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:05.859 ************************************ 00:05:05.859 END TEST event_perf 00:05:05.859 ************************************ 00:05:05.859 10:16:19 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.859 10:16:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:05.859 10:16:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.859 10:16:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:05.859 ************************************ 00:05:05.859 START TEST event_reactor 00:05:05.859 ************************************ 00:05:05.859 10:16:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:05.859 [2024-11-19 10:16:19.495618] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:05.859 [2024-11-19 10:16:19.495724] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58093 ] 00:05:06.118 [2024-11-19 10:16:19.669393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.118 [2024-11-19 10:16:19.771442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.495 test_start 00:05:07.495 oneshot 00:05:07.495 tick 100 00:05:07.495 tick 100 00:05:07.495 tick 250 00:05:07.495 tick 100 00:05:07.495 tick 100 00:05:07.495 tick 100 00:05:07.495 tick 250 00:05:07.495 tick 500 00:05:07.495 tick 100 00:05:07.495 tick 100 00:05:07.495 tick 250 00:05:07.495 tick 100 00:05:07.495 tick 100 00:05:07.495 test_end 00:05:07.495 00:05:07.495 real 0m1.547s 00:05:07.495 user 0m1.350s 00:05:07.495 sys 0m0.090s 00:05:07.495 10:16:20 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.495 10:16:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:07.495 ************************************ 00:05:07.495 END TEST event_reactor 00:05:07.495 ************************************ 00:05:07.495 10:16:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.495 10:16:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:07.495 10:16:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.495 10:16:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:07.495 ************************************ 00:05:07.495 START TEST event_reactor_perf 00:05:07.495 ************************************ 00:05:07.495 10:16:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:07.495 [2024-11-19 
10:16:21.103782] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:07.495 [2024-11-19 10:16:21.103905] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58131 ] 00:05:07.754 [2024-11-19 10:16:21.275337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.754 [2024-11-19 10:16:21.384378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.131 test_start 00:05:09.131 test_end 00:05:09.131 Performance: 385891 events per second 00:05:09.131 00:05:09.131 real 0m1.555s 00:05:09.131 user 0m1.362s 00:05:09.131 sys 0m0.086s 00:05:09.131 10:16:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.131 10:16:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:09.131 ************************************ 00:05:09.131 END TEST event_reactor_perf 00:05:09.131 ************************************ 00:05:09.131 10:16:22 event -- event/event.sh@49 -- # uname -s 00:05:09.131 10:16:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:09.132 10:16:22 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:09.132 10:16:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.132 10:16:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.132 10:16:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.132 ************************************ 00:05:09.132 START TEST event_scheduler 00:05:09.132 ************************************ 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:09.132 * Looking for test storage... 
00:05:09.132 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.132 10:16:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.132 --rc genhtml_branch_coverage=1 00:05:09.132 --rc genhtml_function_coverage=1 00:05:09.132 --rc genhtml_legend=1 00:05:09.132 --rc geninfo_all_blocks=1 00:05:09.132 --rc geninfo_unexecuted_blocks=1 00:05:09.132 00:05:09.132 ' 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.132 --rc genhtml_branch_coverage=1 00:05:09.132 --rc genhtml_function_coverage=1 00:05:09.132 --rc 
genhtml_legend=1 00:05:09.132 --rc geninfo_all_blocks=1 00:05:09.132 --rc geninfo_unexecuted_blocks=1 00:05:09.132 00:05:09.132 ' 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.132 --rc genhtml_branch_coverage=1 00:05:09.132 --rc genhtml_function_coverage=1 00:05:09.132 --rc genhtml_legend=1 00:05:09.132 --rc geninfo_all_blocks=1 00:05:09.132 --rc geninfo_unexecuted_blocks=1 00:05:09.132 00:05:09.132 ' 00:05:09.132 10:16:22 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.132 --rc genhtml_branch_coverage=1 00:05:09.132 --rc genhtml_function_coverage=1 00:05:09.132 --rc genhtml_legend=1 00:05:09.132 --rc geninfo_all_blocks=1 00:05:09.132 --rc geninfo_unexecuted_blocks=1 00:05:09.132 00:05:09.132 ' 00:05:09.132 10:16:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:09.391 10:16:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58206 00:05:09.391 10:16:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:09.391 10:16:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.391 10:16:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58206 00:05:09.391 10:16:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58206 ']' 00:05:09.391 10:16:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.391 10:16:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
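The `cmp_versions` xtrace above (from `scripts/common.sh`) splits each version string on `.`, `-`, and `:` and compares the fields numerically, which is how checks like `lt 1.15 2` resolve. A standalone sketch of that idea — the function name and the zero-padding of missing fields are my own; the real script may differ in edge cases:

```shell
#!/usr/bin/env bash
# Compare dotted version strings field by field, in the spirit of
# scripts/common.sh's cmp_versions ("lt 1.15 2" in the log above).
ver_lt() {
    local IFS='.-:'
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}   # pad missing fields with 0
        (( a < b )) && return 0       # strictly less at this field
        (( a > b )) && return 1
    done
    return 1                          # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Here `1.15` splits into fields `1` and `15`, and the very first field comparison (`1 < 2`) decides the result, just as the trace shows `decimal 1` / `decimal 2` being compared.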
00:05:09.391 10:16:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.391 10:16:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.391 10:16:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.391 [2024-11-19 10:16:22.998992] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:09.391 [2024-11-19 10:16:22.999139] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58206 ] 00:05:09.649 [2024-11-19 10:16:23.175408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:09.650 [2024-11-19 10:16:23.296404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.650 [2024-11-19 10:16:23.296607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.650 [2024-11-19 10:16:23.297370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:09.650 [2024-11-19 10:16:23.297397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:10.216 10:16:23 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.216 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.216 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.216 POWER: Cannot set governor of lcore 0 to performance 00:05:10.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.216 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.216 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:10.216 POWER: Cannot set governor of lcore 0 to userspace 00:05:10.216 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:10.216 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:10.216 POWER: Unable to set Power Management Environment for lcore 0 00:05:10.216 [2024-11-19 10:16:23.841772] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:10.216 [2024-11-19 10:16:23.841795] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:10.216 [2024-11-19 10:16:23.841807] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:10.216 [2024-11-19 10:16:23.841827] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:10.216 [2024-11-19 10:16:23.841836] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:10.216 [2024-11-19 10:16:23.841846] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.216 10:16:23 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.216 10:16:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.475 [2024-11-19 10:16:24.161012] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
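Earlier in this run, `waitforlisten` blocked until the scheduler app's RPC socket at `/var/tmp/spdk.sock` appeared. A stripped-down sketch of that polling idea — the function name, retry count, and sleep interval here are illustrative stand-ins, not the real helper's values:

```shell
#!/usr/bin/env bash
# Poll until a UNIX domain socket appears, with a bounded retry count,
# mirroring the waitforlisten idiom seen in the log.
wait_for_sock() {
    local path=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$path" ] && return 0   # -S: path exists and is a socket
        sleep 0.1
    done
    return 1
}

wait_for_sock /tmp/does-not-exist.sock 3 || echo "timed out"
```

The real helper also keeps checking that the process is still alive while it waits, so a crashed app fails fast instead of burning the whole retry budget.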
00:05:10.475 10:16:24 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.475 10:16:24 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:10.475 10:16:24 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.475 10:16:24 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.475 10:16:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:10.475 ************************************ 00:05:10.475 START TEST scheduler_create_thread 00:05:10.475 ************************************ 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.475 2 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.475 3 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.475 4 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.475 5 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.475 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.476 6 00:05:10.476 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.476 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:10.476 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.476 10:16:24 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:10.476 7 00:05:10.476 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.476 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.735 8 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.735 9 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:10.735 10 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.735 10:16:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.113 10:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.113 10:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.113 10:16:25 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.113 10:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.113 10:16:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:12.681 10:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:12.681 10:16:26 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.681 10:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:12.681 10:16:26 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:13.621 10:16:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:13.621 10:16:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:13.621 10:16:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:13.621 10:16:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:13.621 10:16:27 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.558 10:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.558 00:05:14.558 real 0m3.882s 00:05:14.558 user 0m0.030s 00:05:14.558 sys 0m0.008s 00:05:14.558 10:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.558 10:16:28 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:14.558 ************************************ 00:05:14.558 END TEST scheduler_create_thread 00:05:14.558 ************************************ 00:05:14.558 10:16:28 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:14.558 10:16:28 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58206 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58206 ']' 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58206 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58206 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:14.558 killing process with pid 58206 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58206' 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58206 00:05:14.558 10:16:28 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58206 00:05:14.816 [2024-11-19 10:16:28.438256] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:16.191 00:05:16.191 real 0m6.875s 00:05:16.191 user 0m14.218s 00:05:16.191 sys 0m0.502s 00:05:16.191 10:16:29 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.191 10:16:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:16.191 ************************************ 00:05:16.191 END TEST event_scheduler 00:05:16.191 ************************************ 00:05:16.191 10:16:29 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.191 10:16:29 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.191 10:16:29 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.191 10:16:29 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.191 10:16:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:16.191 ************************************ 00:05:16.191 START TEST app_repeat 00:05:16.191 ************************************ 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58324 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.191 
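The app_repeat test below registers a trap so the background app gets killed even when the test errors out (`trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT`). A stripped-down sketch of that cleanup pattern, with `sleep` standing in for the app binary:

```shell
#!/usr/bin/env bash
# Start a background "app" and guarantee cleanup on any exit path,
# in the spirit of the trap/killprocess idiom used throughout the log.
sleep 30 &
repeat_pid=$!

cleanup() {
    kill "$repeat_pid" 2>/dev/null
}
trap cleanup EXIT

# kill -0 sends no signal; it only checks the process exists.
kill -0 "$repeat_pid" && echo "stand-in app running (pid $repeat_pid)"
```

Because the trap fires on EXIT as well as on SIGINT/SIGTERM, the background process cannot be leaked by an early `return` or a failed assertion.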
10:16:29 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.191 Process app_repeat pid: 58324 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58324' 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:16.191 spdk_app_start Round 0 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:16.191 10:16:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.191 10:16:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:16.191 [2024-11-19 10:16:29.699429] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:16.191 [2024-11-19 10:16:29.699537] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58324 ] 00:05:16.191 [2024-11-19 10:16:29.853729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.191 [2024-11-19 10:16:29.963162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.191 [2024-11-19 10:16:29.963200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:17.128 10:16:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.128 10:16:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:17.128 10:16:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.128 Malloc0 00:05:17.128 10:16:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.386 Malloc1 00:05:17.386 10:16:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.386 10:16:31 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.386 10:16:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.644 /dev/nbd0 00:05:17.644 10:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.644 10:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.644 1+0 records in 00:05:17.644 1+0 
records out 00:05:17.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000388872 s, 10.5 MB/s 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.644 10:16:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.644 10:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.644 10:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.644 10:16:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:17.904 /dev/nbd1 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.904 1+0 records in 00:05:17.904 1+0 records out 00:05:17.904 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198924 s, 20.6 MB/s 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:17.904 10:16:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.904 10:16:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.163 { 00:05:18.163 "nbd_device": "/dev/nbd0", 00:05:18.163 "bdev_name": "Malloc0" 00:05:18.163 }, 00:05:18.163 { 00:05:18.163 "nbd_device": "/dev/nbd1", 00:05:18.163 "bdev_name": "Malloc1" 00:05:18.163 } 00:05:18.163 ]' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.163 { 00:05:18.163 "nbd_device": "/dev/nbd0", 00:05:18.163 "bdev_name": "Malloc0" 00:05:18.163 }, 00:05:18.163 { 00:05:18.163 "nbd_device": "/dev/nbd1", 00:05:18.163 "bdev_name": "Malloc1" 00:05:18.163 } 00:05:18.163 ]' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.163 /dev/nbd1' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.163 /dev/nbd1' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.163 256+0 records in 00:05:18.163 256+0 records out 00:05:18.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111242 s, 94.3 MB/s 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.163 256+0 records in 00:05:18.163 256+0 records out 00:05:18.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244487 s, 42.9 MB/s 00:05:18.163 10:16:31 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.163 256+0 records in 00:05:18.163 256+0 records out 00:05:18.163 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243788 s, 43.0 MB/s 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.163 10:16:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:18.422 10:16:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.423 10:16:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.423 10:16:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.681 10:16:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.939 10:16:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:18.939 10:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:18.939 10:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.939 10:16:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:18.940 10:16:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:18.940 10:16:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.507 10:16:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:20.443 [2024-11-19 10:16:34.123938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.718 [2024-11-19 10:16:34.234468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.718 [2024-11-19 10:16:34.234468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.718 
[2024-11-19 10:16:34.418697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.718 [2024-11-19 10:16:34.418763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.622 10:16:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:22.622 spdk_app_start Round 1 00:05:22.622 10:16:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.622 10:16:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.622 10:16:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:22.622 10:16:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.881 Malloc0 00:05:22.881 10:16:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:23.140 Malloc1 00:05:23.140 10:16:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:23.140 10:16:36 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.140 10:16:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.399 /dev/nbd0 00:05:23.399 10:16:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.399 10:16:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.399 1+0 records in 00:05:23.399 1+0 records out 00:05:23.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341598 s, 12.0 MB/s 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.399 10:16:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.400 10:16:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.400 
10:16:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.400 10:16:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.400 10:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.400 10:16:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.400 10:16:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.658 /dev/nbd1 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.658 1+0 records in 00:05:23.658 1+0 records out 00:05:23.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358214 s, 11.4 MB/s 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:23.658 10:16:37 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:23.658 10:16:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.658 10:16:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.658 { 00:05:23.658 "nbd_device": "/dev/nbd0", 00:05:23.658 "bdev_name": "Malloc0" 00:05:23.658 }, 00:05:23.658 { 00:05:23.658 "nbd_device": "/dev/nbd1", 00:05:23.658 "bdev_name": "Malloc1" 00:05:23.658 } 00:05:23.658 ]' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.917 { 00:05:23.917 "nbd_device": "/dev/nbd0", 00:05:23.917 "bdev_name": "Malloc0" 00:05:23.917 }, 00:05:23.917 { 00:05:23.917 "nbd_device": "/dev/nbd1", 00:05:23.917 "bdev_name": "Malloc1" 00:05:23.917 } 00:05:23.917 ]' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.917 /dev/nbd1' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.917 /dev/nbd1' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.917 
10:16:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.917 256+0 records in 00:05:23.917 256+0 records out 00:05:23.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138306 s, 75.8 MB/s 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.917 256+0 records in 00:05:23.917 256+0 records out 00:05:23.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024287 s, 43.2 MB/s 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.917 256+0 records in 00:05:23.917 256+0 records out 00:05:23.917 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219349 s, 47.8 MB/s 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.917 10:16:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.918 10:16:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:24.176 10:16:37 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:24.176 10:16:37 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:24.176 10:16:37 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:24.176 10:16:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.177 10:16:37 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.177 10:16:37 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:24.177 10:16:37 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.177 10:16:37 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.177 10:16:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:24.177 10:16:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.435 10:16:38 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.435 10:16:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.693 10:16:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.693 10:16:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.953 10:16:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.327 [2024-11-19 10:16:39.757228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.327 [2024-11-19 10:16:39.863088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.327 [2024-11-19 10:16:39.863148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.327 [2024-11-19 10:16:40.049970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.327 [2024-11-19 10:16:40.050060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:05:28.230 10:16:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:28.230 spdk_app_start Round 2 00:05:28.230 10:16:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:28.230 10:16:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.230 10:16:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:28.230 10:16:41 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.488 Malloc0 00:05:28.489 10:16:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.748 Malloc1 00:05:28.748 10:16:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.748 
10:16:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.748 10:16:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:29.007 /dev/nbd0 00:05:29.007 10:16:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:29.007 10:16:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:29.007 10:16:42 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.007 1+0 records in 00:05:29.007 1+0 records out 00:05:29.007 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00373432 s, 1.1 MB/s 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.007 10:16:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.007 10:16:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.007 10:16:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.007 10:16:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:29.291 /dev/nbd1 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:29.291 10:16:42 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:29.291 1+0 records in 00:05:29.291 1+0 records out 00:05:29.291 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351152 s, 11.7 MB/s 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:29.291 10:16:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.291 10:16:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.559 { 00:05:29.559 "nbd_device": "/dev/nbd0", 00:05:29.559 "bdev_name": "Malloc0" 00:05:29.559 }, 00:05:29.559 { 00:05:29.559 "nbd_device": "/dev/nbd1", 00:05:29.559 "bdev_name": 
"Malloc1" 00:05:29.559 } 00:05:29.559 ]' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.559 { 00:05:29.559 "nbd_device": "/dev/nbd0", 00:05:29.559 "bdev_name": "Malloc0" 00:05:29.559 }, 00:05:29.559 { 00:05:29.559 "nbd_device": "/dev/nbd1", 00:05:29.559 "bdev_name": "Malloc1" 00:05:29.559 } 00:05:29.559 ]' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.559 /dev/nbd1' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.559 /dev/nbd1' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.559 256+0 records in 00:05:29.559 256+0 records out 00:05:29.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122222 s, 85.8 MB/s 
00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.559 256+0 records in 00:05:29.559 256+0 records out 00:05:29.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239014 s, 43.9 MB/s 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.559 256+0 records in 00:05:29.559 256+0 records out 00:05:29.559 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0258425 s, 40.6 MB/s 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.559 10:16:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.818 10:16:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.076 10:16:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:30.335 10:16:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:30.335 10:16:43 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.594 10:16:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.970 [2024-11-19 10:16:45.404479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.970 [2024-11-19 10:16:45.510285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.970 [2024-11-19 10:16:45.510292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.970 [2024-11-19 10:16:45.696782] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.971 [2024-11-19 10:16:45.696871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.875 10:16:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58324 /var/tmp/spdk-nbd.sock 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58324 ']' 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:33.875 10:16:47 event.app_repeat -- event/event.sh@39 -- # killprocess 58324 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58324 ']' 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58324 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58324 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58324' 00:05:33.875 killing process with pid 58324 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58324 00:05:33.875 10:16:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58324 00:05:34.811 spdk_app_start is called in Round 0. 00:05:34.811 Shutdown signal received, stop current app iteration 00:05:34.811 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:34.811 spdk_app_start is called in Round 1. 00:05:34.811 Shutdown signal received, stop current app iteration 00:05:34.811 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:34.811 spdk_app_start is called in Round 2. 
00:05:34.811 Shutdown signal received, stop current app iteration 00:05:34.811 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 reinitialization... 00:05:34.811 spdk_app_start is called in Round 3. 00:05:34.811 Shutdown signal received, stop current app iteration 00:05:34.811 10:16:48 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:34.811 ************************************ 00:05:34.811 END TEST app_repeat 00:05:34.811 ************************************ 00:05:34.811 10:16:48 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:34.811 00:05:34.811 real 0m18.938s 00:05:34.811 user 0m40.550s 00:05:34.811 sys 0m2.656s 00:05:34.811 10:16:48 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.811 10:16:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.070 10:16:48 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.070 10:16:48 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.070 10:16:48 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.070 10:16:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.070 10:16:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.070 ************************************ 00:05:35.070 START TEST cpu_locks 00:05:35.070 ************************************ 00:05:35.070 10:16:48 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:35.070 * Looking for test storage... 
00:05:35.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:35.070 10:16:48 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.070 10:16:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.070 10:16:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.070 10:16:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:35.070 10:16:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.329 10:16:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:35.329 10:16:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.329 10:16:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.329 10:16:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.329 10:16:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 
00:05:35.329 00:05:35.329 ' 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.329 --rc genhtml_branch_coverage=1 00:05:35.329 --rc genhtml_function_coverage=1 00:05:35.329 --rc genhtml_legend=1 00:05:35.329 --rc geninfo_all_blocks=1 00:05:35.329 --rc geninfo_unexecuted_blocks=1 00:05:35.329 00:05:35.329 ' 00:05:35.329 10:16:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.329 10:16:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.329 10:16:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.329 10:16:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.329 10:16:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.329 ************************************ 00:05:35.329 START TEST default_locks 00:05:35.329 ************************************ 00:05:35.329 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:35.329 10:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58766 00:05:35.329 10:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.330 
10:16:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58766 00:05:35.330 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58766 ']' 00:05:35.330 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.330 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.330 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.330 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.330 10:16:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.330 [2024-11-19 10:16:48.966169] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:35.330 [2024-11-19 10:16:48.966398] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58766 ] 00:05:35.589 [2024-11-19 10:16:49.139135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.589 [2024-11-19 10:16:49.247259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58766 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58766 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58766 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58766 ']' 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58766 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.523 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58766 00:05:36.781 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.782 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.782 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58766' 00:05:36.782 killing process with pid 58766 00:05:36.782 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58766 00:05:36.782 10:16:50 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58766 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58766 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58766 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58766 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58766 ']' 00:05:39.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:39.313 ERROR: process (pid: 58766) is no longer running 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58766) - No such process 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:39.313 00:05:39.313 real 0m3.738s 00:05:39.313 user 0m3.670s 00:05:39.313 sys 0m0.539s 00:05:39.313 ************************************ 00:05:39.313 END TEST default_locks 00:05:39.313 ************************************ 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.313 10:16:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.313 10:16:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:39.313 10:16:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:39.313 10:16:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.313 10:16:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:39.313 ************************************ 00:05:39.313 START TEST default_locks_via_rpc 00:05:39.313 ************************************ 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58835 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58835 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58835 ']' 00:05:39.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.313 10:16:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.313 [2024-11-19 10:16:52.771414] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:39.313 [2024-11-19 10:16:52.771526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58835 ] 00:05:39.313 [2024-11-19 10:16:52.928288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.313 [2024-11-19 10:16:53.037645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.249 10:16:53 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58835 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58835 00:05:40.249 10:16:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58835 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58835 ']' 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58835 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58835 00:05:40.508 killing process with pid 58835 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58835' 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58835 00:05:40.508 10:16:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58835 00:05:43.039 00:05:43.039 real 0m3.898s 00:05:43.039 user 0m3.842s 00:05:43.039 sys 0m0.611s 00:05:43.039 10:16:56 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.039 10:16:56 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.039 ************************************ 00:05:43.039 END TEST default_locks_via_rpc 00:05:43.039 ************************************ 00:05:43.039 10:16:56 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:43.039 10:16:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.039 10:16:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.039 10:16:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:43.039 ************************************ 00:05:43.039 START TEST non_locking_app_on_locked_coremask 00:05:43.039 ************************************ 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58909 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58909 /var/tmp/spdk.sock 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58909 ']' 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:43.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.039 10:16:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:43.039 [2024-11-19 10:16:56.729628] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:43.039 [2024-11-19 10:16:56.730177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58909 ] 00:05:43.297 [2024-11-19 10:16:56.890109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.297 [2024-11-19 10:16:57.000690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58925 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58925 /var/tmp/spdk2.sock 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58925 ']' 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:44.232 10:16:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:44.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.232 10:16:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:44.232 [2024-11-19 10:16:57.907527] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:44.232 [2024-11-19 10:16:57.907733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58925 ] 00:05:44.489 [2024-11-19 10:16:58.079463] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:44.489 [2024-11-19 10:16:58.079520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.745 [2024-11-19 10:16:58.305451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.277 10:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.277 10:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:47.277 10:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58909 00:05:47.277 10:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58909 00:05:47.277 10:17:00 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58909 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58909 ']' 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58909 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58909 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.537 killing process with pid 58909 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58909' 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58909 00:05:47.537 10:17:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58909 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58925 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58925 ']' 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58925 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58925 00:05:52.810 killing process with pid 58925 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58925' 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58925 00:05:52.810 10:17:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58925 00:05:54.755 00:05:54.755 real 0m11.671s 00:05:54.755 user 0m11.924s 00:05:54.755 sys 0m1.361s 00:05:54.755 10:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:54.755 ************************************ 00:05:54.755 END TEST non_locking_app_on_locked_coremask 00:05:54.755 ************************************ 00:05:54.755 10:17:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.755 10:17:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:54.755 10:17:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.755 10:17:08 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.755 10:17:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.755 ************************************ 00:05:54.755 START TEST locking_app_on_unlocked_coremask 00:05:54.755 ************************************ 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59081 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59081 /var/tmp/spdk.sock 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59081 ']' 00:05:54.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.755 10:17:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.755 [2024-11-19 10:17:08.470123] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:05:54.755 [2024-11-19 10:17:08.470735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59081 ] 00:05:55.014 [2024-11-19 10:17:08.646017] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:55.014 [2024-11-19 10:17:08.646189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.014 [2024-11-19 10:17:08.755654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59097 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59097 /var/tmp/spdk2.sock 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59097 ']' 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.956 10:17:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.956 [2024-11-19 10:17:09.660162] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:05:55.956 [2024-11-19 10:17:09.660415] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59097 ] 00:05:56.215 [2024-11-19 10:17:09.837967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.477 [2024-11-19 10:17:10.051523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59097 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59097 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59081 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59081 ']' 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59081 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59081 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:05:59.022 killing process with pid 59081 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59081' 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59081 00:05:59.022 10:17:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59081 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59097 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59097 ']' 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59097 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59097 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59097' 00:06:04.295 killing process with pid 59097 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59097 00:06:04.295 10:17:17 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59097 00:06:06.206 00:06:06.206 real 0m11.105s 00:06:06.206 user 0m11.364s 00:06:06.206 sys 0m1.203s 00:06:06.206 ************************************ 00:06:06.206 END TEST locking_app_on_unlocked_coremask 00:06:06.206 ************************************ 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.206 10:17:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:06.206 10:17:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.206 10:17:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.206 10:17:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.206 ************************************ 00:06:06.206 START TEST locking_app_on_locked_coremask 00:06:06.206 ************************************ 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59241 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59241 /var/tmp/spdk.sock 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59241 ']' 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.206 10:17:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.206 [2024-11-19 10:17:19.628300] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:06.206 [2024-11-19 10:17:19.628526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:06:06.206 [2024-11-19 10:17:19.788788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.206 [2024-11-19 10:17:19.895082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59257 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59257 /var/tmp/spdk2.sock 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59257 /var/tmp/spdk2.sock 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59257 /var/tmp/spdk2.sock 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59257 ']' 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.144 10:17:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.144 [2024-11-19 10:17:20.844372] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:07.144 [2024-11-19 10:17:20.844601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:06:07.404 [2024-11-19 10:17:21.035864] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59241 has claimed it. 00:06:07.404 [2024-11-19 10:17:21.035929] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.974 ERROR: process (pid: 59257) is no longer running 00:06:07.974 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59257) - No such process 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59241 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59241 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59241 00:06:07.974 10:17:21 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59241 ']' 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59241 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.974 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59241 00:06:08.234 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.234 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.234 killing process with pid 59241 00:06:08.234 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59241' 00:06:08.234 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59241 00:06:08.234 10:17:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59241 00:06:10.773 00:06:10.773 real 0m4.460s 00:06:10.773 user 0m4.652s 00:06:10.773 sys 0m0.701s 00:06:10.773 10:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.773 10:17:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.773 ************************************ 00:06:10.773 END TEST locking_app_on_locked_coremask 00:06:10.773 ************************************ 00:06:10.773 10:17:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.773 10:17:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:10.773 10:17:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.773 10:17:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.773 ************************************ 00:06:10.773 START TEST locking_overlapped_coremask 00:06:10.773 ************************************ 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59328 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59328 /var/tmp/spdk.sock 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59328 ']' 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.773 10:17:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.773 [2024-11-19 10:17:24.156232] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:10.773 [2024-11-19 10:17:24.156439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59328 ] 00:06:10.773 [2024-11-19 10:17:24.332391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.773 [2024-11-19 10:17:24.441685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.773 [2024-11-19 10:17:24.441811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.773 [2024-11-19 10:17:24.441850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59346 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59346 /var/tmp/spdk2.sock 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59346 /var/tmp/spdk2.sock 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59346 /var/tmp/spdk2.sock 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59346 ']' 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.712 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.712 [2024-11-19 10:17:25.344655] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:11.712 [2024-11-19 10:17:25.344854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59346 ] 00:06:11.972 [2024-11-19 10:17:25.512688] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59328 has claimed it. 00:06:11.972 [2024-11-19 10:17:25.512769] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:12.233 ERROR: process (pid: 59346) is no longer running 00:06:12.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59346) - No such process 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59328 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59328 ']' 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59328 00:06:12.233 10:17:25 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.233 10:17:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59328 00:06:12.493 10:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.493 10:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.493 10:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59328' 00:06:12.493 killing process with pid 59328 00:06:12.493 10:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59328 00:06:12.493 10:17:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59328 00:06:15.032 00:06:15.032 real 0m4.286s 00:06:15.032 user 0m11.653s 00:06:15.032 sys 0m0.557s 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 ************************************ 00:06:15.032 END TEST locking_overlapped_coremask 00:06:15.032 ************************************ 00:06:15.032 10:17:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:15.032 10:17:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:15.032 10:17:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.032 10:17:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 ************************************ 00:06:15.032 START TEST 
locking_overlapped_coremask_via_rpc 00:06:15.032 ************************************ 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59410 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59410 /var/tmp/spdk.sock 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59410 ']' 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.032 10:17:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.032 [2024-11-19 10:17:28.508725] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:15.032 [2024-11-19 10:17:28.508842] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:06:15.032 [2024-11-19 10:17:28.682254] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:15.032 [2024-11-19 10:17:28.682306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:15.032 [2024-11-19 10:17:28.795065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.032 [2024-11-19 10:17:28.795185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.032 [2024-11-19 10:17:28.795219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59428 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59428 /var/tmp/spdk2.sock 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59428 ']' 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.974 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:15.975 10:17:29 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.975 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:15.975 10:17:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:15.975 [2024-11-19 10:17:29.721104] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:15.975 [2024-11-19 10:17:29.721295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:06:16.234 [2024-11-19 10:17:29.888543] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:16.234 [2024-11-19 10:17:29.888593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:16.493 [2024-11-19 10:17:30.119164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.493 [2024-11-19 10:17:30.119293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.493 [2024-11-19 10:17:30.119342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.033 10:17:32 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.033 [2024-11-19 10:17:32.279199] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59410 has claimed it. 00:06:19.033 request: 00:06:19.033 { 00:06:19.033 "method": "framework_enable_cpumask_locks", 00:06:19.033 "req_id": 1 00:06:19.033 } 00:06:19.033 Got JSON-RPC error response 00:06:19.033 response: 00:06:19.033 { 00:06:19.033 "code": -32603, 00:06:19.033 "message": "Failed to claim CPU core: 2" 00:06:19.033 } 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:19.033 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59410 /var/tmp/spdk.sock 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59410 ']' 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59428 /var/tmp/spdk2.sock 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59428 ']' 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:19.034 00:06:19.034 real 0m4.276s 00:06:19.034 user 0m1.207s 00:06:19.034 sys 0m0.200s 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.034 10:17:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:19.034 ************************************ 00:06:19.034 END TEST locking_overlapped_coremask_via_rpc 00:06:19.034 ************************************ 00:06:19.034 10:17:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:19.034 10:17:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59410 ]] 00:06:19.034 10:17:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59410 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59410 ']' 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59410 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59410 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59410' 00:06:19.034 killing process with pid 59410 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59410 00:06:19.034 10:17:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59410 00:06:21.572 10:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59428 ]] 00:06:21.572 10:17:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59428 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59428 ']' 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59428 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59428 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59428' 00:06:21.572 killing 
process with pid 59428 00:06:21.572 10:17:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59428 00:06:21.573 10:17:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59428 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.112 Process with pid 59410 is not found 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59410 ]] 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59410 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59410 ']' 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59410 00:06:24.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59410) - No such process 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59410 is not found' 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59428 ]] 00:06:24.112 Process with pid 59428 is not found 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59428 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59428 ']' 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59428 00:06:24.112 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59428) - No such process 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59428 is not found' 00:06:24.112 10:17:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:24.112 ************************************ 00:06:24.112 END TEST cpu_locks 00:06:24.112 ************************************ 00:06:24.112 00:06:24.112 real 0m48.881s 00:06:24.112 user 1m23.571s 00:06:24.112 sys 0m6.318s 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:24.112 10:17:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.112 ************************************ 00:06:24.112 END TEST event 00:06:24.112 ************************************ 00:06:24.112 00:06:24.112 real 1m19.958s 00:06:24.112 user 2m25.637s 00:06:24.112 sys 0m10.130s 00:06:24.112 10:17:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.112 10:17:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.112 10:17:37 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.112 10:17:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.112 10:17:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.112 10:17:37 -- common/autotest_common.sh@10 -- # set +x 00:06:24.112 ************************************ 00:06:24.112 START TEST thread 00:06:24.112 ************************************ 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:24.113 * Looking for test storage... 
00:06:24.113 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.113 10:17:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.113 10:17:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.113 10:17:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.113 10:17:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.113 10:17:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.113 10:17:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.113 10:17:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.113 10:17:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.113 10:17:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.113 10:17:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.113 10:17:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.113 10:17:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:24.113 10:17:37 thread -- scripts/common.sh@345 -- # : 1 00:06:24.113 10:17:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.113 10:17:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.113 10:17:37 thread -- scripts/common.sh@365 -- # decimal 1 00:06:24.113 10:17:37 thread -- scripts/common.sh@353 -- # local d=1 00:06:24.113 10:17:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.113 10:17:37 thread -- scripts/common.sh@355 -- # echo 1 00:06:24.113 10:17:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.113 10:17:37 thread -- scripts/common.sh@366 -- # decimal 2 00:06:24.113 10:17:37 thread -- scripts/common.sh@353 -- # local d=2 00:06:24.113 10:17:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.113 10:17:37 thread -- scripts/common.sh@355 -- # echo 2 00:06:24.113 10:17:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.113 10:17:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.113 10:17:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.113 10:17:37 thread -- scripts/common.sh@368 -- # return 0 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.113 --rc genhtml_branch_coverage=1 00:06:24.113 --rc genhtml_function_coverage=1 00:06:24.113 --rc genhtml_legend=1 00:06:24.113 --rc geninfo_all_blocks=1 00:06:24.113 --rc geninfo_unexecuted_blocks=1 00:06:24.113 00:06:24.113 ' 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.113 --rc genhtml_branch_coverage=1 00:06:24.113 --rc genhtml_function_coverage=1 00:06:24.113 --rc genhtml_legend=1 00:06:24.113 --rc geninfo_all_blocks=1 00:06:24.113 --rc geninfo_unexecuted_blocks=1 00:06:24.113 00:06:24.113 ' 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.113 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.113 --rc genhtml_branch_coverage=1 00:06:24.113 --rc genhtml_function_coverage=1 00:06:24.113 --rc genhtml_legend=1 00:06:24.113 --rc geninfo_all_blocks=1 00:06:24.113 --rc geninfo_unexecuted_blocks=1 00:06:24.113 00:06:24.113 ' 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.113 --rc genhtml_branch_coverage=1 00:06:24.113 --rc genhtml_function_coverage=1 00:06:24.113 --rc genhtml_legend=1 00:06:24.113 --rc geninfo_all_blocks=1 00:06:24.113 --rc geninfo_unexecuted_blocks=1 00:06:24.113 00:06:24.113 ' 00:06:24.113 10:17:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.113 10:17:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.113 ************************************ 00:06:24.113 START TEST thread_poller_perf 00:06:24.113 ************************************ 00:06:24.113 10:17:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:24.373 [2024-11-19 10:17:37.919111] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:24.373 [2024-11-19 10:17:37.919668] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59623 ] 00:06:24.373 [2024-11-19 10:17:38.088597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.632 [2024-11-19 10:17:38.192633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.632 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:26.036 [2024-11-19T10:17:39.817Z] ====================================== 00:06:26.036 [2024-11-19T10:17:39.817Z] busy:2296457152 (cyc) 00:06:26.036 [2024-11-19T10:17:39.817Z] total_run_count: 425000 00:06:26.036 [2024-11-19T10:17:39.817Z] tsc_hz: 2290000000 (cyc) 00:06:26.036 [2024-11-19T10:17:39.817Z] ====================================== 00:06:26.036 [2024-11-19T10:17:39.817Z] poller_cost: 5403 (cyc), 2359 (nsec) 00:06:26.036 00:06:26.036 ************************************ 00:06:26.036 END TEST thread_poller_perf 00:06:26.036 ************************************ 00:06:26.036 real 0m1.535s 00:06:26.036 user 0m1.340s 00:06:26.036 sys 0m0.089s 00:06:26.036 10:17:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.036 10:17:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.036 10:17:39 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.036 10:17:39 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:26.036 10:17:39 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.036 10:17:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.036 ************************************ 00:06:26.036 START TEST thread_poller_perf 00:06:26.036 
************************************ 00:06:26.036 10:17:39 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:26.036 [2024-11-19 10:17:39.518666] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:26.036 [2024-11-19 10:17:39.518801] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59660 ] 00:06:26.036 [2024-11-19 10:17:39.691732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.036 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:26.036 [2024-11-19 10:17:39.798050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.415 [2024-11-19T10:17:41.196Z] ====================================== 00:06:27.415 [2024-11-19T10:17:41.196Z] busy:2293352706 (cyc) 00:06:27.415 [2024-11-19T10:17:41.196Z] total_run_count: 5542000 00:06:27.415 [2024-11-19T10:17:41.196Z] tsc_hz: 2290000000 (cyc) 00:06:27.415 [2024-11-19T10:17:41.196Z] ====================================== 00:06:27.415 [2024-11-19T10:17:41.196Z] poller_cost: 413 (cyc), 180 (nsec) 00:06:27.415 00:06:27.415 real 0m1.540s 00:06:27.415 user 0m1.332s 00:06:27.415 sys 0m0.101s 00:06:27.415 ************************************ 00:06:27.415 END TEST thread_poller_perf 00:06:27.415 ************************************ 00:06:27.415 10:17:41 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.415 10:17:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:27.415 10:17:41 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:27.415 ************************************ 00:06:27.415 END TEST thread 00:06:27.415 ************************************ 00:06:27.415 
00:06:27.415 real 0m3.431s 00:06:27.415 user 0m2.819s 00:06:27.415 sys 0m0.406s 00:06:27.415 10:17:41 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.415 10:17:41 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.415 10:17:41 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:27.415 10:17:41 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.415 10:17:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.415 10:17:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.415 10:17:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.415 ************************************ 00:06:27.415 START TEST app_cmdline 00:06:27.415 ************************************ 00:06:27.416 10:17:41 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:27.675 * Looking for test storage... 00:06:27.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.675 10:17:41 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.675 --rc genhtml_branch_coverage=1 00:06:27.675 --rc genhtml_function_coverage=1 00:06:27.675 --rc 
genhtml_legend=1 00:06:27.675 --rc geninfo_all_blocks=1 00:06:27.675 --rc geninfo_unexecuted_blocks=1 00:06:27.675 00:06:27.675 ' 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.675 --rc genhtml_branch_coverage=1 00:06:27.675 --rc genhtml_function_coverage=1 00:06:27.675 --rc genhtml_legend=1 00:06:27.675 --rc geninfo_all_blocks=1 00:06:27.675 --rc geninfo_unexecuted_blocks=1 00:06:27.675 00:06:27.675 ' 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.675 --rc genhtml_branch_coverage=1 00:06:27.675 --rc genhtml_function_coverage=1 00:06:27.675 --rc genhtml_legend=1 00:06:27.675 --rc geninfo_all_blocks=1 00:06:27.675 --rc geninfo_unexecuted_blocks=1 00:06:27.675 00:06:27.675 ' 00:06:27.675 10:17:41 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.675 --rc genhtml_branch_coverage=1 00:06:27.675 --rc genhtml_function_coverage=1 00:06:27.675 --rc genhtml_legend=1 00:06:27.675 --rc geninfo_all_blocks=1 00:06:27.675 --rc geninfo_unexecuted_blocks=1 00:06:27.675 00:06:27.675 ' 00:06:27.676 10:17:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:27.676 10:17:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59749 00:06:27.676 10:17:41 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:27.676 10:17:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59749 00:06:27.676 10:17:41 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59749 ']' 00:06:27.676 10:17:41 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.676 10:17:41 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:27.676 10:17:41 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.676 10:17:41 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.676 10:17:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.934 [2024-11-19 10:17:41.457166] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:27.934 [2024-11-19 10:17:41.457297] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59749 ] 00:06:27.934 [2024-11-19 10:17:41.628870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.193 [2024-11-19 10:17:41.735866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.776 10:17:42 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.776 10:17:42 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:28.776 10:17:42 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:29.036 { 00:06:29.036 "version": "SPDK v25.01-pre git sha1 dcc2ca8f3", 00:06:29.036 "fields": { 00:06:29.036 "major": 25, 00:06:29.036 "minor": 1, 00:06:29.036 "patch": 0, 00:06:29.036 "suffix": "-pre", 00:06:29.036 "commit": "dcc2ca8f3" 00:06:29.036 } 00:06:29.036 } 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:29.036 10:17:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:29.036 10:17:42 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:29.296 request: 00:06:29.296 { 00:06:29.296 "method": "env_dpdk_get_mem_stats", 00:06:29.296 "req_id": 1 00:06:29.296 } 00:06:29.296 Got JSON-RPC error response 00:06:29.296 response: 00:06:29.296 { 00:06:29.296 "code": -32601, 00:06:29.296 "message": "Method not found" 00:06:29.296 } 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:29.296 10:17:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59749 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59749 ']' 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59749 00:06:29.296 10:17:42 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59749 00:06:29.296 killing process with pid 59749 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59749' 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@973 -- # kill 59749 00:06:29.296 10:17:43 app_cmdline -- common/autotest_common.sh@978 -- # wait 59749 00:06:31.838 ************************************ 00:06:31.838 END TEST app_cmdline 00:06:31.838 ************************************ 
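The -32601 response above is the expected outcome, not a failure: spdk_tgt was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so `env_dpdk_get_mem_stats` is rejected before dispatch. A sketch of that allowlist behavior (the matching logic here is illustrative, not lifted from spdk_tgt):

```shell
# Illustrative allowlist check: methods outside --rpcs-allowed are answered
# with JSON-RPC error -32601 ("Method not found"), as seen in the log above.
allowed="spdk_get_version,rpc_get_methods"
method="env_dpdk_get_mem_stats"
case ",${allowed}," in
  *",${method},"*) code=0 ;;        # method is on the allowlist
  *)               code=-32601 ;;   # rejected: Method not found
esac
echo "expected JSON-RPC error code: ${code}"
```

The -32601 code itself comes from the JSON-RPC 2.0 specification's "method not found" error, which is why the test treats it as the success path.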
00:06:31.838 00:06:31.838 real 0m4.152s 00:06:31.838 user 0m4.367s 00:06:31.838 sys 0m0.609s 00:06:31.838 10:17:45 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.838 10:17:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.838 10:17:45 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.838 10:17:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.838 10:17:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.838 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:06:31.838 ************************************ 00:06:31.838 START TEST version 00:06:31.838 ************************************ 00:06:31.838 10:17:45 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:31.838 * Looking for test storage... 00:06:31.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.838 10:17:45 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:31.838 10:17:45 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:31.838 10:17:45 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:31.838 10:17:45 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:31.838 10:17:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.838 10:17:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.838 10:17:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.838 10:17:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.838 10:17:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.838 10:17:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.838 10:17:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.838 10:17:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.838 10:17:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.838 10:17:45 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:31.838 10:17:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.838 10:17:45 version -- scripts/common.sh@344 -- # case "$op" in 00:06:31.838 10:17:45 version -- scripts/common.sh@345 -- # : 1 00:06:31.838 10:17:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.838 10:17:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.838 10:17:45 version -- scripts/common.sh@365 -- # decimal 1 00:06:31.838 10:17:45 version -- scripts/common.sh@353 -- # local d=1 00:06:31.838 10:17:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.838 10:17:45 version -- scripts/common.sh@355 -- # echo 1 00:06:31.838 10:17:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.838 10:17:45 version -- scripts/common.sh@366 -- # decimal 2 00:06:31.839 10:17:45 version -- scripts/common.sh@353 -- # local d=2 00:06:31.839 10:17:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.839 10:17:45 version -- scripts/common.sh@355 -- # echo 2 00:06:31.839 10:17:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.839 10:17:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.839 10:17:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.839 10:17:45 version -- scripts/common.sh@368 -- # return 0 00:06:31.839 10:17:45 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.839 10:17:45 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.839 --rc genhtml_branch_coverage=1 00:06:31.839 --rc genhtml_function_coverage=1 00:06:31.839 --rc genhtml_legend=1 00:06:31.839 --rc geninfo_all_blocks=1 00:06:31.839 --rc geninfo_unexecuted_blocks=1 00:06:31.839 00:06:31.839 ' 00:06:31.839 10:17:45 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:06:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.839 --rc genhtml_branch_coverage=1 00:06:31.839 --rc genhtml_function_coverage=1 00:06:31.839 --rc genhtml_legend=1 00:06:31.839 --rc geninfo_all_blocks=1 00:06:31.839 --rc geninfo_unexecuted_blocks=1 00:06:31.839 00:06:31.839 ' 00:06:31.839 10:17:45 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.839 --rc genhtml_branch_coverage=1 00:06:31.839 --rc genhtml_function_coverage=1 00:06:31.839 --rc genhtml_legend=1 00:06:31.839 --rc geninfo_all_blocks=1 00:06:31.839 --rc geninfo_unexecuted_blocks=1 00:06:31.839 00:06:31.839 ' 00:06:31.839 10:17:45 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:31.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.839 --rc genhtml_branch_coverage=1 00:06:31.839 --rc genhtml_function_coverage=1 00:06:31.839 --rc genhtml_legend=1 00:06:31.839 --rc geninfo_all_blocks=1 00:06:31.839 --rc geninfo_unexecuted_blocks=1 00:06:31.839 00:06:31.839 ' 00:06:31.839 10:17:45 version -- app/version.sh@17 -- # get_header_version major 00:06:31.839 10:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # cut -f2 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.839 10:17:45 version -- app/version.sh@17 -- # major=25 00:06:31.839 10:17:45 version -- app/version.sh@18 -- # get_header_version minor 00:06:31.839 10:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # cut -f2 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.839 10:17:45 version -- app/version.sh@18 -- # minor=1 00:06:31.839 10:17:45 
version -- app/version.sh@19 -- # get_header_version patch 00:06:31.839 10:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # cut -f2 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.839 10:17:45 version -- app/version.sh@19 -- # patch=0 00:06:31.839 10:17:45 version -- app/version.sh@20 -- # get_header_version suffix 00:06:31.839 10:17:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # cut -f2 00:06:31.839 10:17:45 version -- app/version.sh@14 -- # tr -d '"' 00:06:31.839 10:17:45 version -- app/version.sh@20 -- # suffix=-pre 00:06:31.839 10:17:45 version -- app/version.sh@22 -- # version=25.1 00:06:31.839 10:17:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:31.839 10:17:45 version -- app/version.sh@28 -- # version=25.1rc0 00:06:31.839 10:17:45 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:31.839 10:17:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:32.098 10:17:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:32.098 10:17:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:32.098 ************************************ 00:06:32.098 END TEST version 00:06:32.098 ************************************ 00:06:32.098 00:06:32.098 real 0m0.315s 00:06:32.098 user 0m0.180s 00:06:32.098 sys 0m0.189s 00:06:32.098 10:17:45 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.098 10:17:45 version -- common/autotest_common.sh@10 -- # set +x 00:06:32.098 
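The version string checked above composes as `major.minor`, appends `.patch` only when the patch level is nonzero, and maps the `-pre` suffix to an `rc0` tag. A sketch of that assembly using the values grepped from version.h (the composition rules are inferred from the logged steps, not copied from app/version.sh):

```shell
# Assemble the SPDK version string from the header values seen above.
major=25; minor=1; patch=0; suffix=-pre
version="${major}.${minor}"
if [ "$patch" -ne 0 ]; then
  version="${version}.${patch}"     # only non-zero patch levels are shown
fi
if [ "$suffix" = "-pre" ]; then
  version="${version}rc0"           # pre-release builds carry an rc0 tag
fi
echo "$version"
```

With patch=0 and suffix=-pre this yields `25.1rc0`, which is what the `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` comparison against the Python package's `spdk.__version__` verifies.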
10:17:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:32.098 10:17:45 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:32.098 10:17:45 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:32.098 10:17:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.098 10:17:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.098 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:06:32.098 ************************************ 00:06:32.098 START TEST bdev_raid 00:06:32.098 ************************************ 00:06:32.098 10:17:45 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:32.099 * Looking for test storage... 00:06:32.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:32.099 10:17:45 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:32.099 10:17:45 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:32.099 10:17:45 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:32.358 10:17:45 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:32.358 10:17:45 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:32.359 10:17:45 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.359 --rc genhtml_branch_coverage=1 00:06:32.359 --rc genhtml_function_coverage=1 00:06:32.359 --rc genhtml_legend=1 00:06:32.359 --rc geninfo_all_blocks=1 00:06:32.359 --rc geninfo_unexecuted_blocks=1 00:06:32.359 00:06:32.359 ' 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:32.359 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:32.359 --rc genhtml_branch_coverage=1 00:06:32.359 --rc genhtml_function_coverage=1 00:06:32.359 --rc genhtml_legend=1 00:06:32.359 --rc geninfo_all_blocks=1 00:06:32.359 --rc geninfo_unexecuted_blocks=1 00:06:32.359 00:06:32.359 ' 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.359 --rc genhtml_branch_coverage=1 00:06:32.359 --rc genhtml_function_coverage=1 00:06:32.359 --rc genhtml_legend=1 00:06:32.359 --rc geninfo_all_blocks=1 00:06:32.359 --rc geninfo_unexecuted_blocks=1 00:06:32.359 00:06:32.359 ' 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:32.359 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:32.359 --rc genhtml_branch_coverage=1 00:06:32.359 --rc genhtml_function_coverage=1 00:06:32.359 --rc genhtml_legend=1 00:06:32.359 --rc geninfo_all_blocks=1 00:06:32.359 --rc geninfo_unexecuted_blocks=1 00:06:32.359 00:06:32.359 ' 00:06:32.359 10:17:45 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:32.359 10:17:45 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:32.359 10:17:45 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:32.359 10:17:45 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:32.359 10:17:45 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:32.359 10:17:45 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:32.359 10:17:45 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.359 10:17:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:32.359 ************************************ 
00:06:32.359 START TEST raid1_resize_data_offset_test 00:06:32.359 ************************************ 00:06:32.359 Process raid pid: 59931 00:06:32.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59931 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59931' 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59931 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59931 ']' 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.359 10:17:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:32.359 [2024-11-19 10:17:46.035735] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:32.359 [2024-11-19 10:17:46.036407] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.619 [2024-11-19 10:17:46.210140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.619 [2024-11-19 10:17:46.317338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.878 [2024-11-19 10:17:46.504545] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:32.878 [2024-11-19 10:17:46.504657] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.137 malloc0 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.137 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.396 malloc1 00:06:33.396 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.396 10:17:46 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:33.396 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.396 10:17:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.396 null0 00:06:33.396 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.396 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:33.396 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.396 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.396 [2024-11-19 10:17:47.012758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:33.397 [2024-11-19 10:17:47.014528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:33.397 [2024-11-19 10:17:47.014612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:33.397 [2024-11-19 10:17:47.014812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:33.397 [2024-11-19 10:17:47.014862] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:33.397 [2024-11-19 10:17:47.015155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:33.397 [2024-11-19 10:17:47.015363] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:33.397 [2024-11-19 10:17:47.015409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:33.397 [2024-11-19 10:17:47.015581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.397 [2024-11-19 10:17:47.072629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.397 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.966 malloc2 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.966 [2024-11-19 10:17:47.594092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:33.966 [2024-11-19 10:17:47.610361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.966 [2024-11-19 10:17:47.612112] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59931 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59931 ']' 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59931 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59931 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59931' 00:06:33.966 killing process with pid 59931 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59931 00:06:33.966 10:17:47 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59931 00:06:33.966 [2024-11-19 10:17:47.693200] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:33.966 [2024-11-19 10:17:47.694809] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:33.966 [2024-11-19 10:17:47.694913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:33.966 [2024-11-19 10:17:47.694954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:33.966 [2024-11-19 10:17:47.728027] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:33.966 [2024-11-19 10:17:47.728383] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:33.966 [2024-11-19 10:17:47.728443] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:35.923 [2024-11-19 10:17:49.409630] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:36.861 10:17:50 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:06:36.861 00:06:36.861 real 0m4.516s 00:06:36.861 user 0m4.445s 00:06:36.861 sys 0m0.480s 00:06:36.861 10:17:50 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.861 10:17:50 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.862 ************************************ 00:06:36.862 END TEST raid1_resize_data_offset_test 00:06:36.862 ************************************ 00:06:36.862 10:17:50 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:06:36.862 10:17:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:36.862 10:17:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.862 10:17:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:36.862 ************************************ 00:06:36.862 START TEST raid0_resize_superblock_test 00:06:36.862 ************************************ 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60015 00:06:36.862 Process raid pid: 60015 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60015' 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60015 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60015 ']' 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.862 10:17:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.862 [2024-11-19 10:17:50.622237] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:36.862 [2024-11-19 10:17:50.622750] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:37.121 [2024-11-19 10:17:50.798035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.381 [2024-11-19 10:17:50.904375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.381 [2024-11-19 10:17:51.101624] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.381 [2024-11-19 10:17:51.101745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:37.950 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.950 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:37.950 10:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:37.950 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.950 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:06:38.210 malloc0 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.210 [2024-11-19 10:17:51.951472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:38.210 [2024-11-19 10:17:51.951537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.210 [2024-11-19 10:17:51.951560] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:38.210 [2024-11-19 10:17:51.951571] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.210 [2024-11-19 10:17:51.953559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.210 [2024-11-19 10:17:51.953599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:38.210 pt0 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.210 10:17:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 82d159f4-984b-47db-80ea-baa2667e8661 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 fd48d4a4-18cc-4701-8818-cc917617c16f 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 e594a7f0-1683-4abf-9987-da36a9d63233 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 [2024-11-19 10:17:52.082937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fd48d4a4-18cc-4701-8818-cc917617c16f is claimed 00:06:38.470 [2024-11-19 10:17:52.083035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e594a7f0-1683-4abf-9987-da36a9d63233 is claimed 00:06:38.470 [2024-11-19 10:17:52.083193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:38.470 [2024-11-19 10:17:52.083210] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:06:38.470 [2024-11-19 10:17:52.083450] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:38.470 [2024-11-19 10:17:52.083642] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:38.470 [2024-11-19 10:17:52.083652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:38.470 [2024-11-19 10:17:52.083790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:38.470 10:17:52 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.470 [2024-11-19 10:17:52.198910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.470 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.471 [2024-11-19 10:17:52.242781] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.471 [2024-11-19 10:17:52.242804] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'fd48d4a4-18cc-4701-8818-cc917617c16f' was resized: old size 131072, new size 204800 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.471 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.731 [2024-11-19 10:17:52.254713] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:38.731 [2024-11-19 10:17:52.254733] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e594a7f0-1683-4abf-9987-da36a9d63233' was resized: old size 131072, new size 204800 00:06:38.731 [2024-11-19 10:17:52.254762] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.731 10:17:52 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.731 [2024-11-19 10:17:52.370599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.731 [2024-11-19 10:17:52.410345] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:06:38.731 [2024-11-19 10:17:52.410450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:38.731 [2024-11-19 10:17:52.410486] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:38.731 [2024-11-19 10:17:52.410522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:38.731 [2024-11-19 10:17:52.410657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.731 [2024-11-19 10:17:52.410728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.731 [2024-11-19 10:17:52.410776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.731 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.731 [2024-11-19 10:17:52.422282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:38.731 [2024-11-19 10:17:52.422374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:38.731 [2024-11-19 10:17:52.422420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:38.731 [2024-11-19 10:17:52.422455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:38.731 [2024-11-19 10:17:52.424560] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:38.731 [2024-11-19 10:17:52.424633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:06:38.732 [2024-11-19 10:17:52.426219] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev fd48d4a4-18cc-4701-8818-cc917617c16f 00:06:38.732 [2024-11-19 10:17:52.426321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev fd48d4a4-18cc-4701-8818-cc917617c16f is claimed 00:06:38.732 [2024-11-19 10:17:52.426444] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e594a7f0-1683-4abf-9987-da36a9d63233 00:06:38.732 [2024-11-19 10:17:52.426465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e594a7f0-1683-4abf-9987-da36a9d63233 is claimed 00:06:38.732 [2024-11-19 10:17:52.426587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev e594a7f0-1683-4abf-9987-da36a9d63233 (2) smaller than existing raid bdev Raid (3) 00:06:38.732 [2024-11-19 10:17:52.426607] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev fd48d4a4-18cc-4701-8818-cc917617c16f: File exists 00:06:38.732 [2024-11-19 10:17:52.426645] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:38.732 [2024-11-19 10:17:52.426656] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:06:38.732 [2024-11-19 10:17:52.426879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:38.732 [2024-11-19 10:17:52.427032] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:38.732 [2024-11-19 10:17:52.427047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:38.732 [2024-11-19 10:17:52.427224] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:38.732 pt0 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd 
bdev_wait_for_examine 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.732 [2024-11-19 10:17:52.450816] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60015 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60015 ']' 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60015 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.732 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60015 00:06:38.992 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:38.992 killing process with pid 60015 00:06:38.992 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:38.992 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60015' 00:06:38.992 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60015 00:06:38.992 [2024-11-19 10:17:52.530823] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:38.992 [2024-11-19 10:17:52.530879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:38.992 [2024-11-19 10:17:52.530915] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:38.992 [2024-11-19 10:17:52.530923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:38.992 10:17:52 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60015 00:06:40.372 [2024-11-19 10:17:53.902676] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:41.310 ************************************ 00:06:41.310 END TEST raid0_resize_superblock_test 00:06:41.310 ************************************ 00:06:41.310 10:17:54 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:41.310 00:06:41.310 real 0m4.441s 00:06:41.310 user 0m4.664s 00:06:41.310 sys 0m0.563s 00:06:41.310 10:17:54 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.310 10:17:54 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.310 10:17:55 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:06:41.310 10:17:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:41.310 10:17:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:41.310 10:17:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:41.310 ************************************ 00:06:41.310 START TEST raid1_resize_superblock_test 00:06:41.310 ************************************ 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60116 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60116' 00:06:41.310 Process raid pid: 60116 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60116 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60116 ']' 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:41.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.310 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:41.570 [2024-11-19 10:17:55.128081] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:41.570 [2024-11-19 10:17:55.128271] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:41.570 [2024-11-19 10:17:55.287230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.868 [2024-11-19 10:17:55.395760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.868 [2024-11-19 10:17:55.595196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:41.868 [2024-11-19 10:17:55.595282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.438 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.438 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:06:42.438 10:17:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:06:42.438 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.438 10:17:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.698 malloc0 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.698 [2024-11-19 10:17:56.426903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:42.698 [2024-11-19 10:17:56.426969] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:42.698 [2024-11-19 10:17:56.427008] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:06:42.698 [2024-11-19 10:17:56.427022] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:42.698 [2024-11-19 10:17:56.429058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:42.698 [2024-11-19 10:17:56.429095] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:42.698 pt0 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.698 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.958 cb38f3ac-e2a7-48ff-8246-252b90c34960 00:06:42.958 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:06:42.959 b029a1c3-ac35-40b4-8996-973662a30429 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 991e8353-11eb-4af1-b185-354a84eef211 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 [2024-11-19 10:17:56.558410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b029a1c3-ac35-40b4-8996-973662a30429 is claimed 00:06:42.959 [2024-11-19 10:17:56.558492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 991e8353-11eb-4af1-b185-354a84eef211 is claimed 00:06:42.959 [2024-11-19 10:17:56.558610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:42.959 [2024-11-19 10:17:56.558624] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:06:42.959 [2024-11-19 10:17:56.558868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:42.959 [2024-11-19 10:17:56.559081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 
00:06:42.959 [2024-11-19 10:17:56.559094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:42.959 [2024-11-19 10:17:56.559237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:42.959 10:17:56 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 [2024-11-19 10:17:56.670408] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 [2024-11-19 10:17:56.714258] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.959 [2024-11-19 10:17:56.714280] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'b029a1c3-ac35-40b4-8996-973662a30429' was resized: old size 131072, new size 204800 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:42.959 [2024-11-19 10:17:56.726198] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:42.959 [2024-11-19 10:17:56.726218] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '991e8353-11eb-4af1-b185-354a84eef211' was resized: old size 131072, new size 204800 00:06:42.959 [2024-11-19 10:17:56.726244] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.959 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.220 10:17:56 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:43.220 [2024-11-19 10:17:56.838098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 [2024-11-19 10:17:56.885811] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:43.220 [2024-11-19 10:17:56.885915] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:43.220 [2024-11-19 10:17:56.885943] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:43.220 [2024-11-19 10:17:56.886073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:43.220 [2024-11-19 10:17:56.886225] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.220 [2024-11-19 10:17:56.886285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.220 [2024-11-19 10:17:56.886296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.220 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.220 [2024-11-19 10:17:56.897748] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:43.220 [2024-11-19 10:17:56.897798] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:43.220 [2024-11-19 10:17:56.897817] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:43.220 [2024-11-19 10:17:56.897829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:43.220 [2024-11-19 10:17:56.899840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:43.220 [2024-11-19 10:17:56.899877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:43.220 [2024-11-19 10:17:56.901385] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev b029a1c3-ac35-40b4-8996-973662a30429 00:06:43.220 [2024-11-19 
10:17:56.901448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev b029a1c3-ac35-40b4-8996-973662a30429 is claimed 00:06:43.220 [2024-11-19 10:17:56.901553] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 991e8353-11eb-4af1-b185-354a84eef211 00:06:43.221 [2024-11-19 10:17:56.901571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 991e8353-11eb-4af1-b185-354a84eef211 is claimed 00:06:43.221 [2024-11-19 10:17:56.901690] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 991e8353-11eb-4af1-b185-354a84eef211 (2) smaller than existing raid bdev Raid (3) 00:06:43.221 [2024-11-19 10:17:56.901707] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev b029a1c3-ac35-40b4-8996-973662a30429: File exists 00:06:43.221 [2024-11-19 10:17:56.901746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:43.221 [2024-11-19 10:17:56.901756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:43.221 [2024-11-19 10:17:56.901973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:43.221 [2024-11-19 10:17:56.902149] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:43.221 [2024-11-19 10:17:56.902163] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:43.221 [2024-11-19 10:17:56.902331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.221 pt0 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.221 10:17:56 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:43.221 [2024-11-19 10:17:56.926423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60116 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60116 ']' 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60116 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:43.221 10:17:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60116 00:06:43.480 killing process with pid 60116 00:06:43.480 10:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.480 10:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.480 10:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60116' 00:06:43.480 10:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60116 00:06:43.480 [2024-11-19 10:17:57.009121] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:43.480 [2024-11-19 10:17:57.009181] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:43.480 [2024-11-19 10:17:57.009227] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:43.480 [2024-11-19 10:17:57.009236] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:43.480 10:17:57 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60116 00:06:44.861 [2024-11-19 10:17:58.364472] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:45.797 10:17:59 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:45.797 00:06:45.797 real 0m4.350s 00:06:45.797 user 0m4.602s 00:06:45.797 sys 0m0.515s 00:06:45.797 10:17:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:45.797 ************************************ 00:06:45.797 END TEST raid1_resize_superblock_test 00:06:45.797 ************************************ 00:06:45.797 10:17:59 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:45.797 10:17:59 
bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:45.797 10:17:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:45.797 10:17:59 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:45.797 10:17:59 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:45.797 10:17:59 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:45.797 10:17:59 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:45.797 10:17:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:45.797 10:17:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:45.797 10:17:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:45.797 ************************************ 00:06:45.797 START TEST raid_function_test_raid0 00:06:45.797 ************************************ 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60213 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60213' 00:06:45.797 Process raid pid: 60213 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60213 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60213 ']' 00:06:45.797 10:17:59 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.797 10:17:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:45.797 [2024-11-19 10:17:59.566583] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:45.797 [2024-11-19 10:17:59.566776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.057 [2024-11-19 10:17:59.740113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.317 [2024-11-19 10:17:59.846250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.317 [2024-11-19 10:18:00.035100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.317 [2024-11-19 10:18:00.035210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.886 Base_1 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.886 Base_2 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.886 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.886 [2024-11-19 10:18:00.469133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.886 [2024-11-19 10:18:00.470813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.886 [2024-11-19 10:18:00.470874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.886 [2024-11-19 10:18:00.470886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.887 [2024-11-19 10:18:00.471156] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:46.887 [2024-11-19 10:18:00.471292] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.887 [2024-11-19 10:18:00.471301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, 
raid_bdev 0x617000007780 00:06:46.887 [2024-11-19 10:18:00.471427] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.887 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:46.887 
10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:47.147 [2024-11-19 10:18:00.684774] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:47.147 /dev/nbd0 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.147 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.148 1+0 records in 00:06:47.148 1+0 records out 00:06:47.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514443 s, 8.0 MB/s 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:47.148 
10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.148 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:47.408 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:47.408 { 00:06:47.408 "nbd_device": "/dev/nbd0", 00:06:47.408 "bdev_name": "raid" 00:06:47.408 } 00:06:47.408 ]' 00:06:47.408 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:47.408 { 00:06:47.408 "nbd_device": "/dev/nbd0", 00:06:47.408 "bdev_name": "raid" 00:06:47.408 } 00:06:47.408 ]' 00:06:47.408 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.408 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:47.408 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:47.408 10:18:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@66 -- # echo 1 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:47.408 10:18:01 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:47.408 4096+0 records in 00:06:47.408 4096+0 records out 00:06:47.408 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0348355 s, 60.2 MB/s 00:06:47.408 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:47.668 4096+0 records in 00:06:47.668 4096+0 records out 00:06:47.668 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.184587 s, 11.4 MB/s 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:47.668 128+0 records in 00:06:47.668 128+0 records out 00:06:47.668 65536 bytes (66 kB, 64 KiB) copied, 0.00114216 s, 57.4 MB/s 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.668 
10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:47.668 2035+0 records in 00:06:47.668 2035+0 records out 00:06:47.668 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0162362 s, 64.2 MB/s 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:47.668 456+0 records in 00:06:47.668 456+0 records out 00:06:47.668 233472 bytes (233 kB, 228 KiB) copied, 0.00380951 s, 61.3 MB/s 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.668 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.928 [2024-11-19 10:18:01.564455] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.928 10:18:01 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:47.928 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60213 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60213 ']' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60213 
00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60213 00:06:48.187 killing process with pid 60213 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:48.187 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:48.188 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60213' 00:06:48.188 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60213 00:06:48.188 [2024-11-19 10:18:01.880188] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:48.188 [2024-11-19 10:18:01.880288] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:48.188 [2024-11-19 10:18:01.880336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:48.188 [2024-11-19 10:18:01.880351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:48.188 10:18:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60213 00:06:48.447 [2024-11-19 10:18:02.074282] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:49.390 10:18:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:49.390 00:06:49.390 real 0m3.620s 00:06:49.390 user 0m4.224s 00:06:49.390 sys 0m0.868s 00:06:49.390 10:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.390 10:18:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 
00:06:49.390 ************************************ 00:06:49.390 END TEST raid_function_test_raid0 00:06:49.390 ************************************ 00:06:49.390 10:18:03 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:49.390 10:18:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:49.390 10:18:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.390 10:18:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:49.650 ************************************ 00:06:49.650 START TEST raid_function_test_concat 00:06:49.650 ************************************ 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60337 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60337' 00:06:49.650 Process raid pid: 60337 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60337 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60337 ']' 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 
00:06:49.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.650 10:18:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:49.650 [2024-11-19 10:18:03.262656] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:49.650 [2024-11-19 10:18:03.262787] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:49.910 [2024-11-19 10:18:03.434672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.910 [2024-11-19 10:18:03.540468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.170 [2024-11-19 10:18:03.734049] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.170 [2024-11-19 10:18:03.734080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.430 Base_1 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.430 Base_2 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.430 [2024-11-19 10:18:04.158083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:50.430 [2024-11-19 10:18:04.159803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:50.430 [2024-11-19 10:18:04.159888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:50.430 [2024-11-19 10:18:04.159899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:50.430 [2024-11-19 10:18:04.160163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:50.430 [2024-11-19 10:18:04.160316] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:50.430 [2024-11-19 10:18:04.160326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:50.430 [2024-11-19 10:18:04.160469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:50.430 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:50.690 
[2024-11-19 10:18:04.393707] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:50.690 /dev/nbd0 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.690 1+0 records in 00:06:50.690 1+0 records out 00:06:50.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392881 s, 10.4 MB/s 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.690 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.951 10:18:04 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.951 { 00:06:50.951 "nbd_device": "/dev/nbd0", 00:06:50.951 "bdev_name": "raid" 00:06:50.951 } 00:06:50.951 ]' 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.951 { 00:06:50.951 "nbd_device": "/dev/nbd0", 00:06:50.951 "bdev_name": "raid" 00:06:50.951 } 00:06:50.951 ]' 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:50.951 10:18:04 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:50.951 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:51.211 4096+0 records in 
00:06:51.211 4096+0 records out 00:06:51.211 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0249824 s, 83.9 MB/s 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:51.211 4096+0 records in 00:06:51.211 4096+0 records out 00:06:51.211 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.186565 s, 11.2 MB/s 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:51.211 128+0 records in 00:06:51.211 128+0 records out 00:06:51.211 65536 bytes (66 kB, 64 KiB) copied, 0.00112184 s, 58.4 MB/s 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:51.211 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.472 10:18:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:51.472 2035+0 records in 00:06:51.472 2035+0 records out 00:06:51.472 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0121899 s, 85.5 MB/s 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:51.472 456+0 records in 00:06:51.472 456+0 records out 00:06:51.472 233472 bytes (233 kB, 228 KiB) copied, 0.00356296 s, 65.5 MB/s 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.472 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.732 [2024-11-19 10:18:05.287492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat 
-- bdev/nbd_common.sh@45 -- # return 0 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.732 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60337 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60337 ']' 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60337 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 
00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60337 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60337' 00:06:51.993 killing process with pid 60337 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60337 00:06:51.993 [2024-11-19 10:18:05.574307] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:51.993 [2024-11-19 10:18:05.574409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:51.993 [2024-11-19 10:18:05.574461] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:51.993 [2024-11-19 10:18:05.574475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:51.993 10:18:05 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60337 00:06:51.993 [2024-11-19 10:18:05.768799] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.376 10:18:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:53.376 00:06:53.376 real 0m3.624s 00:06:53.376 user 0m4.212s 00:06:53.376 sys 0m0.891s 00:06:53.376 10:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:53.376 10:18:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:53.376 ************************************ 00:06:53.376 END TEST 
raid_function_test_concat 00:06:53.376 ************************************ 00:06:53.376 10:18:06 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:53.376 10:18:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:53.376 10:18:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:53.376 10:18:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.376 ************************************ 00:06:53.376 START TEST raid0_resize_test 00:06:53.376 ************************************ 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60464 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60464' 00:06:53.376 Process raid pid: 60464 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60464 00:06:53.376 
10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60464 ']' 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.376 10:18:06 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.377 [2024-11-19 10:18:06.960456] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:53.377 [2024-11-19 10:18:06.960595] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.377 [2024-11-19 10:18:07.132079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.636 [2024-11-19 10:18:07.235277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.895 [2024-11-19 10:18:07.422451] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:53.895 [2024-11-19 10:18:07.422490] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.154 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.154 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:54.154 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:54.154 10:18:07 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.154 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.154 Base_1 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.155 Base_2 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.155 [2024-11-19 10:18:07.815391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:54.155 [2024-11-19 10:18:07.817118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:54.155 [2024-11-19 10:18:07.817172] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:54.155 [2024-11-19 10:18:07.817184] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:54.155 [2024-11-19 10:18:07.817416] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:54.155 [2024-11-19 10:18:07.817561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:54.155 [2024-11-19 10:18:07.817570] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:54.155 [2024-11-19 10:18:07.817701] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.155 [2024-11-19 10:18:07.827346] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.155 [2024-11-19 10:18:07.827377] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:54.155 true 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.155 [2024-11-19 10:18:07.843492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:54.155 10:18:07 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.155 [2024-11-19 10:18:07.887223] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:54.155 [2024-11-19 10:18:07.887249] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:54.155 [2024-11-19 10:18:07.887274] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:54.155 true 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.155 [2024-11-19 10:18:07.903376] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:54.155 10:18:07 
bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60464 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60464 ']' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60464 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.155 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60464 00:06:54.414 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.415 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.415 killing process with pid 60464 00:06:54.415 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60464' 00:06:54.415 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60464 00:06:54.415 [2024-11-19 10:18:07.951413] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.415 [2024-11-19 10:18:07.951481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:54.415 [2024-11-19 10:18:07.951522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:54.415 [2024-11-19 10:18:07.951531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:54.415 10:18:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60464 00:06:54.415 [2024-11-19 10:18:07.968516] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:06:55.355 10:18:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:55.355 00:06:55.355 real 0m2.128s 00:06:55.355 user 0m2.247s 00:06:55.355 sys 0m0.320s 00:06:55.355 10:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.355 10:18:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 ************************************ 00:06:55.355 END TEST raid0_resize_test 00:06:55.355 ************************************ 00:06:55.355 10:18:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:55.355 10:18:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.355 10:18:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.355 10:18:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.355 ************************************ 00:06:55.355 START TEST raid1_resize_test 00:06:55.355 ************************************ 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:55.355 10:18:09 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60520 00:06:55.355 Process raid pid: 60520 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60520' 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60520 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60520 ']' 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.355 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.625 [2024-11-19 10:18:09.160150] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:06:55.626 [2024-11-19 10:18:09.160273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.626 [2024-11-19 10:18:09.332386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.888 [2024-11-19 10:18:09.440076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.888 [2024-11-19 10:18:09.628732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.888 [2024-11-19 10:18:09.628768] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.463 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.463 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.463 10:18:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:56.463 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.463 10:18:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.463 Base_1 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.463 Base_2 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.463 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.463 [2024-11-19 10:18:10.024361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:56.464 [2024-11-19 10:18:10.026051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:56.464 [2024-11-19 10:18:10.026108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.464 [2024-11-19 10:18:10.026120] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:56.464 [2024-11-19 10:18:10.026342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:56.464 [2024-11-19 10:18:10.026490] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.464 [2024-11-19 10:18:10.026503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:56.464 [2024-11-19 10:18:10.026639] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.464 [2024-11-19 10:18:10.036314] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.464 [2024-11-19 10:18:10.036346] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:56.464 true 00:06:56.464 
10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.464 [2024-11-19 10:18:10.052442] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.464 [2024-11-19 10:18:10.100198] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:56.464 [2024-11-19 10:18:10.100222] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:56.464 [2024-11-19 10:18:10.100246] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:56.464 true 00:06:56.464 10:18:10 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.464 [2024-11-19 10:18:10.116327] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60520 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60520 ']' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60520 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60520 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.464 killing process with pid 60520 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60520' 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60520 00:06:56.464 [2024-11-19 10:18:10.179512] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:56.464 [2024-11-19 10:18:10.179576] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:56.464 10:18:10 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60520 00:06:56.464 [2024-11-19 10:18:10.180015] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:56.464 [2024-11-19 10:18:10.180038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:56.464 [2024-11-19 10:18:10.196048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:57.844 10:18:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:57.844 00:06:57.844 real 0m2.158s 00:06:57.844 user 0m2.286s 00:06:57.844 sys 0m0.321s 00:06:57.844 10:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:57.844 10:18:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.844 ************************************ 00:06:57.844 END TEST raid1_resize_test 00:06:57.844 ************************************ 00:06:57.844 10:18:11 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:57.844 10:18:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:57.844 10:18:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:57.844 10:18:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:57.844 10:18:11 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.844 10:18:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:57.844 ************************************ 00:06:57.844 START TEST raid_state_function_test 00:06:57.844 ************************************ 00:06:57.844 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:57.844 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:57.844 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:57.844 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60577 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60577' 00:06:57.845 Process raid pid: 60577 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60577 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60577 ']' 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.845 10:18:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.845 [2024-11-19 10:18:11.395833] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:06:57.845 [2024-11-19 10:18:11.395963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:57.845 [2024-11-19 10:18:11.571256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.104 [2024-11-19 10:18:11.679663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.104 [2024-11-19 10:18:11.872637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.104 [2024-11-19 10:18:11.872686] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.674 [2024-11-19 10:18:12.207840] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:58.674 
[2024-11-19 10:18:12.207896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.674 [2024-11-19 10:18:12.207907] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.674 [2024-11-19 10:18:12.207916] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.674 "name": "Existed_Raid", 00:06:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.674 "strip_size_kb": 64, 00:06:58.674 "state": "configuring", 00:06:58.674 "raid_level": "raid0", 00:06:58.674 "superblock": false, 00:06:58.674 "num_base_bdevs": 2, 00:06:58.674 "num_base_bdevs_discovered": 0, 00:06:58.674 "num_base_bdevs_operational": 2, 00:06:58.674 "base_bdevs_list": [ 00:06:58.674 { 00:06:58.674 "name": "BaseBdev1", 00:06:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.674 "is_configured": false, 00:06:58.674 "data_offset": 0, 00:06:58.674 "data_size": 0 00:06:58.674 }, 00:06:58.674 { 00:06:58.674 "name": "BaseBdev2", 00:06:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.674 "is_configured": false, 00:06:58.674 "data_offset": 0, 00:06:58.674 "data_size": 0 00:06:58.674 } 00:06:58.674 ] 00:06:58.674 }' 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.674 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.934 [2024-11-19 10:18:12.651051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:58.934 [2024-11-19 10:18:12.651092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, 
state configuring 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.934 [2024-11-19 10:18:12.663024] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:58.934 [2024-11-19 10:18:12.663075] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:58.934 [2024-11-19 10:18:12.663084] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.934 [2024-11-19 10:18:12.663095] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:58.934 [2024-11-19 10:18:12.707962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:58.934 BaseBdev1 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:58.934 10:18:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.934 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.194 [ 00:06:59.194 { 00:06:59.194 "name": "BaseBdev1", 00:06:59.194 "aliases": [ 00:06:59.194 "a89425f7-b343-4a2a-b80d-fbfb2fa4a6ab" 00:06:59.194 ], 00:06:59.194 "product_name": "Malloc disk", 00:06:59.194 "block_size": 512, 00:06:59.194 "num_blocks": 65536, 00:06:59.194 "uuid": "a89425f7-b343-4a2a-b80d-fbfb2fa4a6ab", 00:06:59.194 "assigned_rate_limits": { 00:06:59.194 "rw_ios_per_sec": 0, 00:06:59.194 "rw_mbytes_per_sec": 0, 00:06:59.194 "r_mbytes_per_sec": 0, 00:06:59.194 "w_mbytes_per_sec": 0 00:06:59.194 }, 00:06:59.194 "claimed": true, 00:06:59.194 "claim_type": "exclusive_write", 00:06:59.194 "zoned": false, 00:06:59.194 "supported_io_types": { 00:06:59.194 "read": true, 00:06:59.194 "write": true, 00:06:59.194 "unmap": true, 00:06:59.194 "flush": true, 
00:06:59.194 "reset": true, 00:06:59.194 "nvme_admin": false, 00:06:59.194 "nvme_io": false, 00:06:59.194 "nvme_io_md": false, 00:06:59.194 "write_zeroes": true, 00:06:59.194 "zcopy": true, 00:06:59.194 "get_zone_info": false, 00:06:59.194 "zone_management": false, 00:06:59.194 "zone_append": false, 00:06:59.194 "compare": false, 00:06:59.194 "compare_and_write": false, 00:06:59.194 "abort": true, 00:06:59.194 "seek_hole": false, 00:06:59.194 "seek_data": false, 00:06:59.194 "copy": true, 00:06:59.194 "nvme_iov_md": false 00:06:59.194 }, 00:06:59.194 "memory_domains": [ 00:06:59.194 { 00:06:59.194 "dma_device_id": "system", 00:06:59.194 "dma_device_type": 1 00:06:59.194 }, 00:06:59.194 { 00:06:59.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.194 "dma_device_type": 2 00:06:59.194 } 00:06:59.194 ], 00:06:59.194 "driver_specific": {} 00:06:59.194 } 00:06:59.194 ] 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.194 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.194 "name": "Existed_Raid", 00:06:59.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.194 "strip_size_kb": 64, 00:06:59.194 "state": "configuring", 00:06:59.194 "raid_level": "raid0", 00:06:59.194 "superblock": false, 00:06:59.194 "num_base_bdevs": 2, 00:06:59.194 "num_base_bdevs_discovered": 1, 00:06:59.194 "num_base_bdevs_operational": 2, 00:06:59.194 "base_bdevs_list": [ 00:06:59.194 { 00:06:59.194 "name": "BaseBdev1", 00:06:59.194 "uuid": "a89425f7-b343-4a2a-b80d-fbfb2fa4a6ab", 00:06:59.194 "is_configured": true, 00:06:59.194 "data_offset": 0, 00:06:59.194 "data_size": 65536 00:06:59.194 }, 00:06:59.194 { 00:06:59.194 "name": "BaseBdev2", 00:06:59.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.194 "is_configured": false, 00:06:59.194 "data_offset": 0, 00:06:59.195 "data_size": 0 00:06:59.195 } 00:06:59.195 ] 00:06:59.195 }' 00:06:59.195 10:18:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.195 10:18:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.455 [2024-11-19 10:18:13.139260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:59.455 [2024-11-19 10:18:13.139393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.455 [2024-11-19 10:18:13.151283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:59.455 [2024-11-19 10:18:13.152983] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:59.455 [2024-11-19 10:18:13.153036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 
00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:59.455 "name": "Existed_Raid", 00:06:59.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.455 "strip_size_kb": 64, 00:06:59.455 "state": "configuring", 00:06:59.455 "raid_level": "raid0", 00:06:59.455 "superblock": false, 00:06:59.455 "num_base_bdevs": 2, 00:06:59.455 
"num_base_bdevs_discovered": 1, 00:06:59.455 "num_base_bdevs_operational": 2, 00:06:59.455 "base_bdevs_list": [ 00:06:59.455 { 00:06:59.455 "name": "BaseBdev1", 00:06:59.455 "uuid": "a89425f7-b343-4a2a-b80d-fbfb2fa4a6ab", 00:06:59.455 "is_configured": true, 00:06:59.455 "data_offset": 0, 00:06:59.455 "data_size": 65536 00:06:59.455 }, 00:06:59.455 { 00:06:59.455 "name": "BaseBdev2", 00:06:59.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:59.455 "is_configured": false, 00:06:59.455 "data_offset": 0, 00:06:59.455 "data_size": 0 00:06:59.455 } 00:06:59.455 ] 00:06:59.455 }' 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:59.455 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.026 [2024-11-19 10:18:13.607332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:00.026 [2024-11-19 10:18:13.607442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:00.026 [2024-11-19 10:18:13.607468] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:00.026 [2024-11-19 10:18:13.607773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.026 [2024-11-19 10:18:13.607981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:00.026 [2024-11-19 10:18:13.608044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:00.026 [2024-11-19 10:18:13.608340] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.026 BaseBdev2 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.026 [ 00:07:00.026 { 00:07:00.026 "name": "BaseBdev2", 00:07:00.026 "aliases": [ 00:07:00.026 "271e8d15-3247-4e59-a5b9-7e901075af50" 00:07:00.026 ], 00:07:00.026 "product_name": "Malloc disk", 00:07:00.026 "block_size": 512, 00:07:00.026 "num_blocks": 65536, 00:07:00.026 "uuid": "271e8d15-3247-4e59-a5b9-7e901075af50", 00:07:00.026 
"assigned_rate_limits": { 00:07:00.026 "rw_ios_per_sec": 0, 00:07:00.026 "rw_mbytes_per_sec": 0, 00:07:00.026 "r_mbytes_per_sec": 0, 00:07:00.026 "w_mbytes_per_sec": 0 00:07:00.026 }, 00:07:00.026 "claimed": true, 00:07:00.026 "claim_type": "exclusive_write", 00:07:00.026 "zoned": false, 00:07:00.026 "supported_io_types": { 00:07:00.026 "read": true, 00:07:00.026 "write": true, 00:07:00.026 "unmap": true, 00:07:00.026 "flush": true, 00:07:00.026 "reset": true, 00:07:00.026 "nvme_admin": false, 00:07:00.026 "nvme_io": false, 00:07:00.026 "nvme_io_md": false, 00:07:00.026 "write_zeroes": true, 00:07:00.026 "zcopy": true, 00:07:00.026 "get_zone_info": false, 00:07:00.026 "zone_management": false, 00:07:00.026 "zone_append": false, 00:07:00.026 "compare": false, 00:07:00.026 "compare_and_write": false, 00:07:00.026 "abort": true, 00:07:00.026 "seek_hole": false, 00:07:00.026 "seek_data": false, 00:07:00.026 "copy": true, 00:07:00.026 "nvme_iov_md": false 00:07:00.026 }, 00:07:00.026 "memory_domains": [ 00:07:00.026 { 00:07:00.026 "dma_device_id": "system", 00:07:00.026 "dma_device_type": 1 00:07:00.026 }, 00:07:00.026 { 00:07:00.026 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.026 "dma_device_type": 2 00:07:00.026 } 00:07:00.026 ], 00:07:00.026 "driver_specific": {} 00:07:00.026 } 00:07:00.026 ] 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.026 "name": "Existed_Raid", 00:07:00.026 "uuid": "4ed3012c-73ba-4723-bdee-2c655d87b5fe", 00:07:00.026 "strip_size_kb": 64, 00:07:00.026 "state": "online", 00:07:00.026 "raid_level": "raid0", 00:07:00.026 "superblock": false, 00:07:00.026 "num_base_bdevs": 2, 00:07:00.026 "num_base_bdevs_discovered": 2, 00:07:00.026 "num_base_bdevs_operational": 2, 00:07:00.026 "base_bdevs_list": [ 00:07:00.026 { 
00:07:00.026 "name": "BaseBdev1", 00:07:00.026 "uuid": "a89425f7-b343-4a2a-b80d-fbfb2fa4a6ab", 00:07:00.026 "is_configured": true, 00:07:00.026 "data_offset": 0, 00:07:00.026 "data_size": 65536 00:07:00.026 }, 00:07:00.026 { 00:07:00.026 "name": "BaseBdev2", 00:07:00.026 "uuid": "271e8d15-3247-4e59-a5b9-7e901075af50", 00:07:00.026 "is_configured": true, 00:07:00.026 "data_offset": 0, 00:07:00.026 "data_size": 65536 00:07:00.026 } 00:07:00.026 ] 00:07:00.026 }' 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.026 10:18:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.286 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.286 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:00.286 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.287 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.287 [2024-11-19 10:18:14.046885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.546 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:00.546 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:00.546 "name": "Existed_Raid", 00:07:00.546 "aliases": [ 00:07:00.546 "4ed3012c-73ba-4723-bdee-2c655d87b5fe" 00:07:00.546 ], 00:07:00.546 "product_name": "Raid Volume", 00:07:00.546 "block_size": 512, 00:07:00.546 "num_blocks": 131072, 00:07:00.546 "uuid": "4ed3012c-73ba-4723-bdee-2c655d87b5fe", 00:07:00.546 "assigned_rate_limits": { 00:07:00.546 "rw_ios_per_sec": 0, 00:07:00.546 "rw_mbytes_per_sec": 0, 00:07:00.546 "r_mbytes_per_sec": 0, 00:07:00.546 "w_mbytes_per_sec": 0 00:07:00.546 }, 00:07:00.546 "claimed": false, 00:07:00.546 "zoned": false, 00:07:00.546 "supported_io_types": { 00:07:00.546 "read": true, 00:07:00.546 "write": true, 00:07:00.546 "unmap": true, 00:07:00.546 "flush": true, 00:07:00.546 "reset": true, 00:07:00.546 "nvme_admin": false, 00:07:00.546 "nvme_io": false, 00:07:00.546 "nvme_io_md": false, 00:07:00.546 "write_zeroes": true, 00:07:00.546 "zcopy": false, 00:07:00.546 "get_zone_info": false, 00:07:00.546 "zone_management": false, 00:07:00.546 "zone_append": false, 00:07:00.546 "compare": false, 00:07:00.546 "compare_and_write": false, 00:07:00.546 "abort": false, 00:07:00.546 "seek_hole": false, 00:07:00.546 "seek_data": false, 00:07:00.546 "copy": false, 00:07:00.546 "nvme_iov_md": false 00:07:00.546 }, 00:07:00.546 "memory_domains": [ 00:07:00.546 { 00:07:00.546 "dma_device_id": "system", 00:07:00.546 "dma_device_type": 1 00:07:00.546 }, 00:07:00.546 { 00:07:00.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.546 "dma_device_type": 2 00:07:00.546 }, 00:07:00.546 { 00:07:00.546 "dma_device_id": "system", 00:07:00.546 "dma_device_type": 1 00:07:00.546 }, 00:07:00.546 { 00:07:00.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.546 "dma_device_type": 2 00:07:00.546 } 00:07:00.546 ], 00:07:00.546 "driver_specific": { 00:07:00.546 "raid": { 00:07:00.546 "uuid": "4ed3012c-73ba-4723-bdee-2c655d87b5fe", 
00:07:00.546 "strip_size_kb": 64, 00:07:00.546 "state": "online", 00:07:00.546 "raid_level": "raid0", 00:07:00.546 "superblock": false, 00:07:00.546 "num_base_bdevs": 2, 00:07:00.546 "num_base_bdevs_discovered": 2, 00:07:00.546 "num_base_bdevs_operational": 2, 00:07:00.546 "base_bdevs_list": [ 00:07:00.546 { 00:07:00.546 "name": "BaseBdev1", 00:07:00.546 "uuid": "a89425f7-b343-4a2a-b80d-fbfb2fa4a6ab", 00:07:00.546 "is_configured": true, 00:07:00.546 "data_offset": 0, 00:07:00.546 "data_size": 65536 00:07:00.546 }, 00:07:00.546 { 00:07:00.546 "name": "BaseBdev2", 00:07:00.547 "uuid": "271e8d15-3247-4e59-a5b9-7e901075af50", 00:07:00.547 "is_configured": true, 00:07:00.547 "data_offset": 0, 00:07:00.547 "data_size": 65536 00:07:00.547 } 00:07:00.547 ] 00:07:00.547 } 00:07:00.547 } 00:07:00.547 }' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:00.547 BaseBdev2' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.547 [2024-11-19 10:18:14.226363] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:00.547 [2024-11-19 10:18:14.226394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:00.547 [2024-11-19 10:18:14.226442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:00.547 10:18:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.547 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:00.807 "name": "Existed_Raid", 00:07:00.807 "uuid": "4ed3012c-73ba-4723-bdee-2c655d87b5fe", 00:07:00.807 "strip_size_kb": 64, 00:07:00.807 "state": "offline", 00:07:00.807 "raid_level": "raid0", 00:07:00.807 "superblock": false, 00:07:00.807 "num_base_bdevs": 2, 00:07:00.807 "num_base_bdevs_discovered": 1, 00:07:00.807 "num_base_bdevs_operational": 1, 00:07:00.807 "base_bdevs_list": [ 00:07:00.807 { 00:07:00.807 "name": null, 00:07:00.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:00.807 "is_configured": false, 00:07:00.807 "data_offset": 0, 00:07:00.807 "data_size": 65536 00:07:00.807 }, 00:07:00.807 { 00:07:00.807 "name": "BaseBdev2", 00:07:00.807 "uuid": "271e8d15-3247-4e59-a5b9-7e901075af50", 00:07:00.807 "is_configured": true, 00:07:00.807 "data_offset": 0, 00:07:00.807 "data_size": 65536 00:07:00.807 } 00:07:00.807 ] 00:07:00.807 }' 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:00.807 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:01.067 10:18:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 [2024-11-19 10:18:14.739106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:01.067 [2024-11-19 10:18:14.739156] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:01.067 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60577 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60577 ']' 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60577 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60577 00:07:01.327 killing process with pid 60577 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60577' 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60577 00:07:01.327 [2024-11-19 10:18:14.909060] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:01.327 10:18:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60577 00:07:01.327 [2024-11-19 10:18:14.925216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.266 10:18:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:02.266 00:07:02.266 real 0m4.665s 00:07:02.266 user 0m6.679s 00:07:02.266 sys 
0m0.751s 00:07:02.266 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.266 10:18:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.266 ************************************ 00:07:02.266 END TEST raid_state_function_test 00:07:02.266 ************************************ 00:07:02.266 10:18:16 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:02.266 10:18:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:02.266 10:18:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.266 10:18:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:02.266 ************************************ 00:07:02.266 START TEST raid_state_function_test_sb 00:07:02.266 ************************************ 00:07:02.266 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:02.266 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:02.266 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:02.266 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:02.266 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
(( i <= num_base_bdevs )) 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60819 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 
60819' 00:07:02.267 Process raid pid: 60819 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60819 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60819 ']' 00:07:02.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.267 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:02.526 [2024-11-19 10:18:16.123080] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:02.526 [2024-11-19 10:18:16.123287] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.526 [2024-11-19 10:18:16.295415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.795 [2024-11-19 10:18:16.405296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.089 [2024-11-19 10:18:16.596043] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.089 [2024-11-19 10:18:16.596154] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 [2024-11-19 10:18:16.947417] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.348 [2024-11-19 10:18:16.947512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.348 [2024-11-19 10:18:16.947545] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.348 [2024-11-19 10:18:16.947569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.348 
10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.348 10:18:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.348 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.348 "name": "Existed_Raid", 00:07:03.348 "uuid": "4bc4bb73-048e-4639-bfdc-e73d5a7b1667", 00:07:03.348 "strip_size_kb": 
64, 00:07:03.348 "state": "configuring", 00:07:03.348 "raid_level": "raid0", 00:07:03.348 "superblock": true, 00:07:03.348 "num_base_bdevs": 2, 00:07:03.348 "num_base_bdevs_discovered": 0, 00:07:03.348 "num_base_bdevs_operational": 2, 00:07:03.348 "base_bdevs_list": [ 00:07:03.348 { 00:07:03.348 "name": "BaseBdev1", 00:07:03.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.348 "is_configured": false, 00:07:03.348 "data_offset": 0, 00:07:03.349 "data_size": 0 00:07:03.349 }, 00:07:03.349 { 00:07:03.349 "name": "BaseBdev2", 00:07:03.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.349 "is_configured": false, 00:07:03.349 "data_offset": 0, 00:07:03.349 "data_size": 0 00:07:03.349 } 00:07:03.349 ] 00:07:03.349 }' 00:07:03.349 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.349 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 [2024-11-19 10:18:17.394597] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:03.919 [2024-11-19 10:18:17.394682] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.919 10:18:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 [2024-11-19 10:18:17.406564] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:03.919 [2024-11-19 10:18:17.406647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:03.919 [2024-11-19 10:18:17.406680] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:03.919 [2024-11-19 10:18:17.406705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 [2024-11-19 10:18:17.451485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:03.919 BaseBdev1 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 [ 00:07:03.919 { 00:07:03.919 "name": "BaseBdev1", 00:07:03.919 "aliases": [ 00:07:03.919 "61c7f69a-ed79-4486-980c-367c7ffe9fb9" 00:07:03.919 ], 00:07:03.919 "product_name": "Malloc disk", 00:07:03.919 "block_size": 512, 00:07:03.919 "num_blocks": 65536, 00:07:03.919 "uuid": "61c7f69a-ed79-4486-980c-367c7ffe9fb9", 00:07:03.919 "assigned_rate_limits": { 00:07:03.919 "rw_ios_per_sec": 0, 00:07:03.919 "rw_mbytes_per_sec": 0, 00:07:03.919 "r_mbytes_per_sec": 0, 00:07:03.919 "w_mbytes_per_sec": 0 00:07:03.919 }, 00:07:03.919 "claimed": true, 00:07:03.919 "claim_type": "exclusive_write", 00:07:03.919 "zoned": false, 00:07:03.919 "supported_io_types": { 00:07:03.919 "read": true, 00:07:03.919 "write": true, 00:07:03.919 "unmap": true, 00:07:03.919 "flush": true, 00:07:03.919 "reset": true, 00:07:03.919 "nvme_admin": false, 00:07:03.919 "nvme_io": false, 00:07:03.919 "nvme_io_md": false, 00:07:03.919 "write_zeroes": true, 00:07:03.919 "zcopy": true, 00:07:03.919 "get_zone_info": false, 00:07:03.919 "zone_management": false, 00:07:03.919 "zone_append": false, 00:07:03.919 "compare": false, 00:07:03.919 "compare_and_write": false, 00:07:03.919 
"abort": true, 00:07:03.919 "seek_hole": false, 00:07:03.919 "seek_data": false, 00:07:03.919 "copy": true, 00:07:03.919 "nvme_iov_md": false 00:07:03.919 }, 00:07:03.919 "memory_domains": [ 00:07:03.919 { 00:07:03.919 "dma_device_id": "system", 00:07:03.919 "dma_device_type": 1 00:07:03.919 }, 00:07:03.919 { 00:07:03.919 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:03.919 "dma_device_type": 2 00:07:03.919 } 00:07:03.919 ], 00:07:03.919 "driver_specific": {} 00:07:03.919 } 00:07:03.919 ] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.919 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:03.919 "name": "Existed_Raid", 00:07:03.919 "uuid": "5c3e1c75-aad0-47a0-9169-e870b0a090d6", 00:07:03.919 "strip_size_kb": 64, 00:07:03.919 "state": "configuring", 00:07:03.919 "raid_level": "raid0", 00:07:03.919 "superblock": true, 00:07:03.919 "num_base_bdevs": 2, 00:07:03.919 "num_base_bdevs_discovered": 1, 00:07:03.919 "num_base_bdevs_operational": 2, 00:07:03.919 "base_bdevs_list": [ 00:07:03.919 { 00:07:03.919 "name": "BaseBdev1", 00:07:03.919 "uuid": "61c7f69a-ed79-4486-980c-367c7ffe9fb9", 00:07:03.919 "is_configured": true, 00:07:03.919 "data_offset": 2048, 00:07:03.919 "data_size": 63488 00:07:03.919 }, 00:07:03.919 { 00:07:03.919 "name": "BaseBdev2", 00:07:03.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:03.919 "is_configured": false, 00:07:03.919 "data_offset": 0, 00:07:03.919 "data_size": 0 00:07:03.920 } 00:07:03.920 ] 00:07:03.920 }' 00:07:03.920 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:03.920 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:04.179 [2024-11-19 10:18:17.862860] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:04.179 [2024-11-19 10:18:17.862956] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 [2024-11-19 10:18:17.874873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:04.179 [2024-11-19 10:18:17.876714] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.179 [2024-11-19 10:18:17.876791] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.179 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.179 "name": "Existed_Raid", 00:07:04.179 "uuid": "829bfd9c-e280-49c8-adbb-cfec4e51bb13", 00:07:04.179 "strip_size_kb": 64, 00:07:04.179 "state": "configuring", 00:07:04.179 "raid_level": "raid0", 00:07:04.179 "superblock": true, 00:07:04.179 "num_base_bdevs": 2, 00:07:04.179 "num_base_bdevs_discovered": 1, 00:07:04.179 "num_base_bdevs_operational": 2, 00:07:04.180 "base_bdevs_list": [ 00:07:04.180 { 00:07:04.180 "name": "BaseBdev1", 00:07:04.180 "uuid": "61c7f69a-ed79-4486-980c-367c7ffe9fb9", 00:07:04.180 "is_configured": true, 00:07:04.180 "data_offset": 2048, 
00:07:04.180 "data_size": 63488 00:07:04.180 }, 00:07:04.180 { 00:07:04.180 "name": "BaseBdev2", 00:07:04.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.180 "is_configured": false, 00:07:04.180 "data_offset": 0, 00:07:04.180 "data_size": 0 00:07:04.180 } 00:07:04.180 ] 00:07:04.180 }' 00:07:04.180 10:18:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.180 10:18:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.750 [2024-11-19 10:18:18.317903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:04.750 [2024-11-19 10:18:18.318269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:04.750 [2024-11-19 10:18:18.318323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:04.750 [2024-11-19 10:18:18.318595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:04.750 [2024-11-19 10:18:18.318769] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:04.750 [2024-11-19 10:18:18.318812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:04.750 BaseBdev2 00:07:04.750 [2024-11-19 10:18:18.319009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.750 [ 00:07:04.750 { 00:07:04.750 "name": "BaseBdev2", 00:07:04.750 "aliases": [ 00:07:04.750 "e3ac669b-4ba1-4779-9c29-27cab0434771" 00:07:04.750 ], 00:07:04.750 "product_name": "Malloc disk", 00:07:04.750 "block_size": 512, 00:07:04.750 "num_blocks": 65536, 00:07:04.750 "uuid": "e3ac669b-4ba1-4779-9c29-27cab0434771", 00:07:04.750 "assigned_rate_limits": { 00:07:04.750 "rw_ios_per_sec": 0, 00:07:04.750 "rw_mbytes_per_sec": 0, 00:07:04.750 "r_mbytes_per_sec": 0, 00:07:04.750 "w_mbytes_per_sec": 0 00:07:04.750 }, 00:07:04.750 "claimed": true, 00:07:04.750 "claim_type": 
"exclusive_write", 00:07:04.750 "zoned": false, 00:07:04.750 "supported_io_types": { 00:07:04.750 "read": true, 00:07:04.750 "write": true, 00:07:04.750 "unmap": true, 00:07:04.750 "flush": true, 00:07:04.750 "reset": true, 00:07:04.750 "nvme_admin": false, 00:07:04.750 "nvme_io": false, 00:07:04.750 "nvme_io_md": false, 00:07:04.750 "write_zeroes": true, 00:07:04.750 "zcopy": true, 00:07:04.750 "get_zone_info": false, 00:07:04.750 "zone_management": false, 00:07:04.750 "zone_append": false, 00:07:04.750 "compare": false, 00:07:04.750 "compare_and_write": false, 00:07:04.750 "abort": true, 00:07:04.750 "seek_hole": false, 00:07:04.750 "seek_data": false, 00:07:04.750 "copy": true, 00:07:04.750 "nvme_iov_md": false 00:07:04.750 }, 00:07:04.750 "memory_domains": [ 00:07:04.750 { 00:07:04.750 "dma_device_id": "system", 00:07:04.750 "dma_device_type": 1 00:07:04.750 }, 00:07:04.750 { 00:07:04.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:04.750 "dma_device_type": 2 00:07:04.750 } 00:07:04.750 ], 00:07:04.750 "driver_specific": {} 00:07:04.750 } 00:07:04.750 ] 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.750 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:04.750 "name": "Existed_Raid", 00:07:04.750 "uuid": "829bfd9c-e280-49c8-adbb-cfec4e51bb13", 00:07:04.750 "strip_size_kb": 64, 00:07:04.750 "state": "online", 00:07:04.750 "raid_level": "raid0", 00:07:04.750 "superblock": true, 00:07:04.750 "num_base_bdevs": 2, 00:07:04.750 "num_base_bdevs_discovered": 2, 00:07:04.750 "num_base_bdevs_operational": 2, 00:07:04.750 "base_bdevs_list": [ 00:07:04.751 { 00:07:04.751 "name": "BaseBdev1", 00:07:04.751 "uuid": "61c7f69a-ed79-4486-980c-367c7ffe9fb9", 00:07:04.751 "is_configured": true, 00:07:04.751 "data_offset": 2048, 00:07:04.751 "data_size": 63488 
00:07:04.751 }, 00:07:04.751 { 00:07:04.751 "name": "BaseBdev2", 00:07:04.751 "uuid": "e3ac669b-4ba1-4779-9c29-27cab0434771", 00:07:04.751 "is_configured": true, 00:07:04.751 "data_offset": 2048, 00:07:04.751 "data_size": 63488 00:07:04.751 } 00:07:04.751 ] 00:07:04.751 }' 00:07:04.751 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:04.751 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.010 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.010 [2024-11-19 10:18:18.773424] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:05.270 "name": 
"Existed_Raid", 00:07:05.270 "aliases": [ 00:07:05.270 "829bfd9c-e280-49c8-adbb-cfec4e51bb13" 00:07:05.270 ], 00:07:05.270 "product_name": "Raid Volume", 00:07:05.270 "block_size": 512, 00:07:05.270 "num_blocks": 126976, 00:07:05.270 "uuid": "829bfd9c-e280-49c8-adbb-cfec4e51bb13", 00:07:05.270 "assigned_rate_limits": { 00:07:05.270 "rw_ios_per_sec": 0, 00:07:05.270 "rw_mbytes_per_sec": 0, 00:07:05.270 "r_mbytes_per_sec": 0, 00:07:05.270 "w_mbytes_per_sec": 0 00:07:05.270 }, 00:07:05.270 "claimed": false, 00:07:05.270 "zoned": false, 00:07:05.270 "supported_io_types": { 00:07:05.270 "read": true, 00:07:05.270 "write": true, 00:07:05.270 "unmap": true, 00:07:05.270 "flush": true, 00:07:05.270 "reset": true, 00:07:05.270 "nvme_admin": false, 00:07:05.270 "nvme_io": false, 00:07:05.270 "nvme_io_md": false, 00:07:05.270 "write_zeroes": true, 00:07:05.270 "zcopy": false, 00:07:05.270 "get_zone_info": false, 00:07:05.270 "zone_management": false, 00:07:05.270 "zone_append": false, 00:07:05.270 "compare": false, 00:07:05.270 "compare_and_write": false, 00:07:05.270 "abort": false, 00:07:05.270 "seek_hole": false, 00:07:05.270 "seek_data": false, 00:07:05.270 "copy": false, 00:07:05.270 "nvme_iov_md": false 00:07:05.270 }, 00:07:05.270 "memory_domains": [ 00:07:05.270 { 00:07:05.270 "dma_device_id": "system", 00:07:05.270 "dma_device_type": 1 00:07:05.270 }, 00:07:05.270 { 00:07:05.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.270 "dma_device_type": 2 00:07:05.270 }, 00:07:05.270 { 00:07:05.270 "dma_device_id": "system", 00:07:05.270 "dma_device_type": 1 00:07:05.270 }, 00:07:05.270 { 00:07:05.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.270 "dma_device_type": 2 00:07:05.270 } 00:07:05.270 ], 00:07:05.270 "driver_specific": { 00:07:05.270 "raid": { 00:07:05.270 "uuid": "829bfd9c-e280-49c8-adbb-cfec4e51bb13", 00:07:05.270 "strip_size_kb": 64, 00:07:05.270 "state": "online", 00:07:05.270 "raid_level": "raid0", 00:07:05.270 "superblock": true, 00:07:05.270 
"num_base_bdevs": 2, 00:07:05.270 "num_base_bdevs_discovered": 2, 00:07:05.270 "num_base_bdevs_operational": 2, 00:07:05.270 "base_bdevs_list": [ 00:07:05.270 { 00:07:05.270 "name": "BaseBdev1", 00:07:05.270 "uuid": "61c7f69a-ed79-4486-980c-367c7ffe9fb9", 00:07:05.270 "is_configured": true, 00:07:05.270 "data_offset": 2048, 00:07:05.270 "data_size": 63488 00:07:05.270 }, 00:07:05.270 { 00:07:05.270 "name": "BaseBdev2", 00:07:05.270 "uuid": "e3ac669b-4ba1-4779-9c29-27cab0434771", 00:07:05.270 "is_configured": true, 00:07:05.270 "data_offset": 2048, 00:07:05.270 "data_size": 63488 00:07:05.270 } 00:07:05.270 ] 00:07:05.270 } 00:07:05.270 } 00:07:05.270 }' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:05.270 BaseBdev2' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.270 10:18:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.270 [2024-11-19 10:18:18.992842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:05.270 [2024-11-19 10:18:18.992925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:05.270 [2024-11-19 10:18:18.993015] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.537 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:05.538 10:18:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.538 "name": "Existed_Raid", 00:07:05.538 "uuid": "829bfd9c-e280-49c8-adbb-cfec4e51bb13", 00:07:05.538 "strip_size_kb": 64, 00:07:05.538 "state": "offline", 00:07:05.538 "raid_level": "raid0", 00:07:05.538 "superblock": true, 00:07:05.538 "num_base_bdevs": 2, 00:07:05.538 "num_base_bdevs_discovered": 1, 00:07:05.538 "num_base_bdevs_operational": 1, 00:07:05.538 "base_bdevs_list": [ 00:07:05.538 { 00:07:05.538 "name": null, 00:07:05.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:05.538 "is_configured": false, 00:07:05.538 "data_offset": 0, 00:07:05.538 "data_size": 63488 00:07:05.538 }, 00:07:05.538 { 00:07:05.538 "name": "BaseBdev2", 00:07:05.538 "uuid": "e3ac669b-4ba1-4779-9c29-27cab0434771", 00:07:05.538 "is_configured": true, 00:07:05.538 "data_offset": 2048, 00:07:05.538 "data_size": 63488 00:07:05.538 } 00:07:05.538 ] 00:07:05.538 }' 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.538 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:05.802 10:18:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.802 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.802 [2024-11-19 10:18:19.533134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:05.802 [2024-11-19 10:18:19.533227] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.061 10:18:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60819 00:07:06.061 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60819 ']' 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60819 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60819 00:07:06.062 killing process with pid 60819 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60819' 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60819 00:07:06.062 [2024-11-19 10:18:19.721145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:06.062 10:18:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60819 00:07:06.062 [2024-11-19 10:18:19.737797] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:07.445 10:18:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@328 -- # return 0 00:07:07.445 00:07:07.445 real 0m4.779s 00:07:07.445 user 0m6.880s 00:07:07.445 sys 0m0.737s 00:07:07.445 10:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.445 10:18:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.445 ************************************ 00:07:07.445 END TEST raid_state_function_test_sb 00:07:07.445 ************************************ 00:07:07.445 10:18:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:07.445 10:18:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:07.445 10:18:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.445 10:18:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:07.445 ************************************ 00:07:07.445 START TEST raid_superblock_test 00:07:07.445 ************************************ 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:07.445 10:18:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:07.445 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61071 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61071 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61071 ']' 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
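killprocess, invoked at the end of raid_state_function_test_sb above, uses `kill -0 $pid` to confirm the target process still exists before actually signalling it. A small Python sketch of that probe; the helper name below is ours for illustration, not part of autotest_common.sh:

```python
import os

def process_alive(pid: int) -> bool:
    """Equivalent of shell `kill -0 $pid`: signal 0 delivers nothing,
    it only checks whether the pid exists and is signallable."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False      # no such process
    except PermissionError:
        return True       # exists, but owned by another user
    return True

# Our own process is trivially alive.
alive = process_alive(os.getpid())
```

In the shell helper the same probe gates the subsequent `kill $pid` / `wait $pid` pair, so a process that already exited is not signalled twice.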
00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.446 10:18:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.446 [2024-11-19 10:18:20.962751] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:07.446 [2024-11-19 10:18:20.962959] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61071 ] 00:07:07.446 [2024-11-19 10:18:21.134554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.705 [2024-11-19 10:18:21.245733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.705 [2024-11-19 10:18:21.431915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:07.705 [2024-11-19 10:18:21.432081] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:08.275 10:18:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.275 malloc1 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.275 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.276 [2024-11-19 10:18:21.839837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:08.276 [2024-11-19 10:18:21.839949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.276 [2024-11-19 10:18:21.840001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:08.276 [2024-11-19 10:18:21.840046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.276 [2024-11-19 10:18:21.842031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.276 [2024-11-19 10:18:21.842096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:08.276 pt1 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:08.276 10:18:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.276 malloc2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.276 [2024-11-19 10:18:21.894010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:08.276 [2024-11-19 10:18:21.894104] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:08.276 [2024-11-19 10:18:21.894143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:08.276 
[2024-11-19 10:18:21.894170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:08.276 [2024-11-19 10:18:21.896146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:08.276 [2024-11-19 10:18:21.896215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:08.276 pt2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.276 [2024-11-19 10:18:21.906045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:08.276 [2024-11-19 10:18:21.907778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:08.276 [2024-11-19 10:18:21.907974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:08.276 [2024-11-19 10:18:21.908041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.276 [2024-11-19 10:18:21.908287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:08.276 [2024-11-19 10:18:21.908462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:08.276 [2024-11-19 10:18:21.908503] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:08.276 [2024-11-19 10:18:21.908667] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:08.276 "name": "raid_bdev1", 00:07:08.276 "uuid": 
"bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:08.276 "strip_size_kb": 64, 00:07:08.276 "state": "online", 00:07:08.276 "raid_level": "raid0", 00:07:08.276 "superblock": true, 00:07:08.276 "num_base_bdevs": 2, 00:07:08.276 "num_base_bdevs_discovered": 2, 00:07:08.276 "num_base_bdevs_operational": 2, 00:07:08.276 "base_bdevs_list": [ 00:07:08.276 { 00:07:08.276 "name": "pt1", 00:07:08.276 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.276 "is_configured": true, 00:07:08.276 "data_offset": 2048, 00:07:08.276 "data_size": 63488 00:07:08.276 }, 00:07:08.276 { 00:07:08.276 "name": "pt2", 00:07:08.276 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.276 "is_configured": true, 00:07:08.276 "data_offset": 2048, 00:07:08.276 "data_size": 63488 00:07:08.276 } 00:07:08.276 ] 00:07:08.276 }' 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:08.276 10:18:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.847 10:18:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.847 [2024-11-19 10:18:22.349487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.847 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:08.847 "name": "raid_bdev1", 00:07:08.847 "aliases": [ 00:07:08.847 "bca5414a-d06d-4078-b25c-c984f4440a29" 00:07:08.847 ], 00:07:08.847 "product_name": "Raid Volume", 00:07:08.847 "block_size": 512, 00:07:08.847 "num_blocks": 126976, 00:07:08.847 "uuid": "bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:08.847 "assigned_rate_limits": { 00:07:08.847 "rw_ios_per_sec": 0, 00:07:08.847 "rw_mbytes_per_sec": 0, 00:07:08.847 "r_mbytes_per_sec": 0, 00:07:08.847 "w_mbytes_per_sec": 0 00:07:08.847 }, 00:07:08.847 "claimed": false, 00:07:08.847 "zoned": false, 00:07:08.847 "supported_io_types": { 00:07:08.847 "read": true, 00:07:08.847 "write": true, 00:07:08.847 "unmap": true, 00:07:08.847 "flush": true, 00:07:08.847 "reset": true, 00:07:08.847 "nvme_admin": false, 00:07:08.847 "nvme_io": false, 00:07:08.847 "nvme_io_md": false, 00:07:08.847 "write_zeroes": true, 00:07:08.847 "zcopy": false, 00:07:08.847 "get_zone_info": false, 00:07:08.847 "zone_management": false, 00:07:08.847 "zone_append": false, 00:07:08.847 "compare": false, 00:07:08.847 "compare_and_write": false, 00:07:08.847 "abort": false, 00:07:08.847 "seek_hole": false, 00:07:08.847 "seek_data": false, 00:07:08.847 "copy": false, 00:07:08.847 "nvme_iov_md": false 00:07:08.847 }, 00:07:08.847 "memory_domains": [ 00:07:08.847 { 00:07:08.847 "dma_device_id": "system", 00:07:08.847 "dma_device_type": 1 00:07:08.847 }, 00:07:08.847 { 00:07:08.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.847 "dma_device_type": 2 00:07:08.847 }, 00:07:08.847 { 00:07:08.847 "dma_device_id": "system", 00:07:08.847 "dma_device_type": 
1 00:07:08.847 }, 00:07:08.847 { 00:07:08.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.847 "dma_device_type": 2 00:07:08.847 } 00:07:08.847 ], 00:07:08.847 "driver_specific": { 00:07:08.847 "raid": { 00:07:08.847 "uuid": "bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:08.847 "strip_size_kb": 64, 00:07:08.847 "state": "online", 00:07:08.847 "raid_level": "raid0", 00:07:08.847 "superblock": true, 00:07:08.847 "num_base_bdevs": 2, 00:07:08.848 "num_base_bdevs_discovered": 2, 00:07:08.848 "num_base_bdevs_operational": 2, 00:07:08.848 "base_bdevs_list": [ 00:07:08.848 { 00:07:08.848 "name": "pt1", 00:07:08.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:08.848 "is_configured": true, 00:07:08.848 "data_offset": 2048, 00:07:08.848 "data_size": 63488 00:07:08.848 }, 00:07:08.848 { 00:07:08.848 "name": "pt2", 00:07:08.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:08.848 "is_configured": true, 00:07:08.848 "data_offset": 2048, 00:07:08.848 "data_size": 63488 00:07:08.848 } 00:07:08.848 ] 00:07:08.848 } 00:07:08.848 } 00:07:08.848 }' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:08.848 pt2' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.848 10:18:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.848 [2024-11-19 10:18:22.569138] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bca5414a-d06d-4078-b25c-c984f4440a29 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bca5414a-d06d-4078-b25c-c984f4440a29 ']' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.848 [2024-11-19 10:18:22.592779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:08.848 [2024-11-19 10:18:22.592840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:08.848 [2024-11-19 10:18:22.592944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:08.848 [2024-11-19 10:18:22.593023] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:08.848 [2024-11-19 10:18:22.593074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.848 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:09.108 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.109 [2024-11-19 10:18:22.728579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:09.109 [2024-11-19 10:18:22.730463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:09.109 [2024-11-19 10:18:22.730572] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:09.109 [2024-11-19 10:18:22.730660] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:09.109 [2024-11-19 10:18:22.730709] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:09.109 [2024-11-19 10:18:22.730741] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:09.109 request: 00:07:09.109 { 00:07:09.109 "name": "raid_bdev1", 00:07:09.109 "raid_level": "raid0", 00:07:09.109 "base_bdevs": [ 00:07:09.109 "malloc1", 00:07:09.109 "malloc2" 00:07:09.109 ], 00:07:09.109 "strip_size_kb": 64, 00:07:09.109 "superblock": false, 00:07:09.109 "method": "bdev_raid_create", 00:07:09.109 "req_id": 1 00:07:09.109 } 00:07:09.109 Got JSON-RPC error response 00:07:09.109 response: 00:07:09.109 { 00:07:09.109 "code": -17, 00:07:09.109 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:09.109 } 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.109 [2024-11-19 10:18:22.780448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:09.109 [2024-11-19 10:18:22.780537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.109 [2024-11-19 10:18:22.780571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:09.109 [2024-11-19 10:18:22.780599] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.109 [2024-11-19 10:18:22.782666] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.109 [2024-11-19 10:18:22.782736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:09.109 [2024-11-19 10:18:22.782843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:09.109 [2024-11-19 10:18:22.782930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:09.109 pt1 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.109 "name": "raid_bdev1", 00:07:09.109 "uuid": "bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:09.109 "strip_size_kb": 64, 00:07:09.109 "state": "configuring", 00:07:09.109 "raid_level": "raid0", 00:07:09.109 "superblock": true, 00:07:09.109 "num_base_bdevs": 2, 00:07:09.109 "num_base_bdevs_discovered": 1, 00:07:09.109 "num_base_bdevs_operational": 2, 00:07:09.109 "base_bdevs_list": [ 00:07:09.109 { 00:07:09.109 "name": "pt1", 00:07:09.109 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.109 "is_configured": true, 00:07:09.109 "data_offset": 2048, 00:07:09.109 "data_size": 63488 00:07:09.109 }, 00:07:09.109 { 00:07:09.109 "name": null, 00:07:09.109 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.109 "is_configured": false, 00:07:09.109 "data_offset": 2048, 00:07:09.109 "data_size": 63488 00:07:09.109 } 00:07:09.109 ] 00:07:09.109 }' 00:07:09.109 10:18:22 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.109 10:18:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.678 [2024-11-19 10:18:23.215749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:09.678 [2024-11-19 10:18:23.215888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.678 [2024-11-19 10:18:23.215931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:09.678 [2024-11-19 10:18:23.215963] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.678 [2024-11-19 10:18:23.216443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.678 [2024-11-19 10:18:23.216506] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:09.678 [2024-11-19 10:18:23.216615] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:09.678 [2024-11-19 10:18:23.216667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:09.678 [2024-11-19 10:18:23.216818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:09.678 [2024-11-19 10:18:23.216858] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.678 [2024-11-19 10:18:23.217123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:09.678 [2024-11-19 10:18:23.217304] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:09.678 [2024-11-19 10:18:23.217346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:09.678 [2024-11-19 10:18:23.217515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.678 pt2 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.678 "name": "raid_bdev1", 00:07:09.678 "uuid": "bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:09.678 "strip_size_kb": 64, 00:07:09.678 "state": "online", 00:07:09.678 "raid_level": "raid0", 00:07:09.678 "superblock": true, 00:07:09.678 "num_base_bdevs": 2, 00:07:09.678 "num_base_bdevs_discovered": 2, 00:07:09.678 "num_base_bdevs_operational": 2, 00:07:09.678 "base_bdevs_list": [ 00:07:09.678 { 00:07:09.678 "name": "pt1", 00:07:09.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.678 "is_configured": true, 00:07:09.678 "data_offset": 2048, 00:07:09.678 "data_size": 63488 00:07:09.678 }, 00:07:09.678 { 00:07:09.678 "name": "pt2", 00:07:09.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.678 "is_configured": true, 00:07:09.678 "data_offset": 2048, 00:07:09.678 "data_size": 63488 00:07:09.678 } 00:07:09.678 ] 00:07:09.678 }' 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.678 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:09.938 
10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.938 [2024-11-19 10:18:23.603364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.938 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:09.938 "name": "raid_bdev1", 00:07:09.938 "aliases": [ 00:07:09.938 "bca5414a-d06d-4078-b25c-c984f4440a29" 00:07:09.938 ], 00:07:09.938 "product_name": "Raid Volume", 00:07:09.938 "block_size": 512, 00:07:09.938 "num_blocks": 126976, 00:07:09.938 "uuid": "bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:09.938 "assigned_rate_limits": { 00:07:09.938 "rw_ios_per_sec": 0, 00:07:09.938 "rw_mbytes_per_sec": 0, 00:07:09.938 "r_mbytes_per_sec": 0, 00:07:09.938 "w_mbytes_per_sec": 0 00:07:09.938 }, 00:07:09.938 "claimed": false, 00:07:09.938 "zoned": false, 00:07:09.938 "supported_io_types": { 00:07:09.938 "read": true, 00:07:09.938 "write": true, 00:07:09.938 "unmap": true, 00:07:09.938 "flush": true, 00:07:09.938 "reset": true, 00:07:09.938 "nvme_admin": false, 00:07:09.938 "nvme_io": false, 00:07:09.938 "nvme_io_md": false, 00:07:09.938 
"write_zeroes": true, 00:07:09.938 "zcopy": false, 00:07:09.938 "get_zone_info": false, 00:07:09.938 "zone_management": false, 00:07:09.938 "zone_append": false, 00:07:09.938 "compare": false, 00:07:09.938 "compare_and_write": false, 00:07:09.938 "abort": false, 00:07:09.938 "seek_hole": false, 00:07:09.938 "seek_data": false, 00:07:09.938 "copy": false, 00:07:09.938 "nvme_iov_md": false 00:07:09.938 }, 00:07:09.938 "memory_domains": [ 00:07:09.938 { 00:07:09.938 "dma_device_id": "system", 00:07:09.938 "dma_device_type": 1 00:07:09.938 }, 00:07:09.938 { 00:07:09.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.938 "dma_device_type": 2 00:07:09.938 }, 00:07:09.938 { 00:07:09.938 "dma_device_id": "system", 00:07:09.939 "dma_device_type": 1 00:07:09.939 }, 00:07:09.939 { 00:07:09.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.939 "dma_device_type": 2 00:07:09.939 } 00:07:09.939 ], 00:07:09.939 "driver_specific": { 00:07:09.939 "raid": { 00:07:09.939 "uuid": "bca5414a-d06d-4078-b25c-c984f4440a29", 00:07:09.939 "strip_size_kb": 64, 00:07:09.939 "state": "online", 00:07:09.939 "raid_level": "raid0", 00:07:09.939 "superblock": true, 00:07:09.939 "num_base_bdevs": 2, 00:07:09.939 "num_base_bdevs_discovered": 2, 00:07:09.939 "num_base_bdevs_operational": 2, 00:07:09.939 "base_bdevs_list": [ 00:07:09.939 { 00:07:09.939 "name": "pt1", 00:07:09.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:09.939 "is_configured": true, 00:07:09.939 "data_offset": 2048, 00:07:09.939 "data_size": 63488 00:07:09.939 }, 00:07:09.939 { 00:07:09.939 "name": "pt2", 00:07:09.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:09.939 "is_configured": true, 00:07:09.939 "data_offset": 2048, 00:07:09.939 "data_size": 63488 00:07:09.939 } 00:07:09.939 ] 00:07:09.939 } 00:07:09.939 } 00:07:09.939 }' 00:07:09.939 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:09.939 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:09.939 pt2' 00:07:09.939 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.207 10:18:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.207 [2024-11-19 10:18:23.834876] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bca5414a-d06d-4078-b25c-c984f4440a29 '!=' bca5414a-d06d-4078-b25c-c984f4440a29 ']' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61071 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61071 ']' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61071 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61071 00:07:10.207 killing process with pid 61071 
00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61071' 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61071 00:07:10.207 [2024-11-19 10:18:23.917697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:10.207 [2024-11-19 10:18:23.917792] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.207 [2024-11-19 10:18:23.917838] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:10.207 [2024-11-19 10:18:23.917850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:10.207 10:18:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61071 00:07:10.480 [2024-11-19 10:18:24.114861] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.419 10:18:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:11.419 00:07:11.419 real 0m4.272s 00:07:11.419 user 0m6.034s 00:07:11.419 sys 0m0.667s 00:07:11.419 10:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.419 10:18:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.419 ************************************ 00:07:11.419 END TEST raid_superblock_test 00:07:11.419 ************************************ 00:07:11.681 10:18:25 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:11.681 10:18:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:11.681 10:18:25 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.681 10:18:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.681 ************************************ 00:07:11.681 START TEST raid_read_error_test 00:07:11.681 ************************************ 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:11.681 10:18:25 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kj8hDhoooi 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61277 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61277 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61277 ']' 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.681 10:18:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.681 [2024-11-19 10:18:25.321942] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:11.681 [2024-11-19 10:18:25.322088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61277 ] 00:07:11.941 [2024-11-19 10:18:25.493139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.941 [2024-11-19 10:18:25.601543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.201 [2024-11-19 10:18:25.784637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.202 [2024-11-19 10:18:25.784691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.461 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.461 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:12.461 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.461 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:12.461 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.461 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.461 BaseBdev1_malloc 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.462 true 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.462 [2024-11-19 10:18:26.175698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:12.462 [2024-11-19 10:18:26.175810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.462 [2024-11-19 10:18:26.175846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:12.462 [2024-11-19 10:18:26.175876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.462 [2024-11-19 10:18:26.177922] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.462 [2024-11-19 10:18:26.178002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:12.462 BaseBdev1 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:12.462 BaseBdev2_malloc 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.462 true 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.462 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.462 [2024-11-19 10:18:26.236840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:12.462 [2024-11-19 10:18:26.236893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:12.462 [2024-11-19 10:18:26.236924] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:12.462 [2024-11-19 10:18:26.236934] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:12.462 [2024-11-19 10:18:26.238939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:12.462 [2024-11-19 10:18:26.239015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:12.722 BaseBdev2 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:12.722 10:18:26 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.722 [2024-11-19 10:18:26.248891] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:12.722 [2024-11-19 10:18:26.250646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:12.722 [2024-11-19 10:18:26.250879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:12.722 [2024-11-19 10:18:26.250927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:12.722 [2024-11-19 10:18:26.251185] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:12.722 [2024-11-19 10:18:26.251403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:12.722 [2024-11-19 10:18:26.251448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:12.722 [2024-11-19 10:18:26.251620] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:12.722 "name": "raid_bdev1", 00:07:12.722 "uuid": "e8ee643e-5ecb-47b0-9b2e-7e2b7b3a4adb", 00:07:12.722 "strip_size_kb": 64, 00:07:12.722 "state": "online", 00:07:12.722 "raid_level": "raid0", 00:07:12.722 "superblock": true, 00:07:12.722 "num_base_bdevs": 2, 00:07:12.722 "num_base_bdevs_discovered": 2, 00:07:12.722 "num_base_bdevs_operational": 2, 00:07:12.722 "base_bdevs_list": [ 00:07:12.722 { 00:07:12.722 "name": "BaseBdev1", 00:07:12.722 "uuid": "12313f60-eddd-5071-ad88-3c3f6e067f83", 00:07:12.722 "is_configured": true, 00:07:12.722 "data_offset": 2048, 00:07:12.722 "data_size": 63488 00:07:12.722 }, 00:07:12.722 { 00:07:12.722 "name": "BaseBdev2", 00:07:12.722 "uuid": "5a022876-1493-5ea0-bc6f-6a4cc0e48342", 00:07:12.722 "is_configured": true, 00:07:12.722 "data_offset": 2048, 00:07:12.722 "data_size": 63488 00:07:12.722 } 00:07:12.722 ] 00:07:12.722 }' 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:12.722 10:18:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.982 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:12.982 10:18:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:13.242 [2024-11-19 10:18:26.785133] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.196 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.197 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.197 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.197 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.197 10:18:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.197 "name": "raid_bdev1", 00:07:14.197 "uuid": "e8ee643e-5ecb-47b0-9b2e-7e2b7b3a4adb", 00:07:14.197 "strip_size_kb": 64, 00:07:14.197 "state": "online", 00:07:14.197 "raid_level": "raid0", 00:07:14.197 "superblock": true, 00:07:14.197 "num_base_bdevs": 2, 00:07:14.197 "num_base_bdevs_discovered": 2, 00:07:14.197 "num_base_bdevs_operational": 2, 00:07:14.197 "base_bdevs_list": [ 00:07:14.197 { 00:07:14.197 "name": "BaseBdev1", 00:07:14.197 "uuid": "12313f60-eddd-5071-ad88-3c3f6e067f83", 00:07:14.197 "is_configured": true, 00:07:14.197 "data_offset": 2048, 00:07:14.197 "data_size": 63488 00:07:14.197 }, 00:07:14.197 { 00:07:14.197 "name": "BaseBdev2", 00:07:14.197 "uuid": "5a022876-1493-5ea0-bc6f-6a4cc0e48342", 00:07:14.197 "is_configured": true, 00:07:14.197 "data_offset": 2048, 00:07:14.197 "data_size": 63488 00:07:14.197 } 00:07:14.197 ] 00:07:14.197 }' 00:07:14.197 10:18:27 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.197 10:18:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.457 [2024-11-19 10:18:28.218596] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:14.457 [2024-11-19 10:18:28.218694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:14.457 [2024-11-19 10:18:28.221286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.457 [2024-11-19 10:18:28.221371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.457 [2024-11-19 10:18:28.221421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.457 [2024-11-19 10:18:28.221461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:14.457 { 00:07:14.457 "results": [ 00:07:14.457 { 00:07:14.457 "job": "raid_bdev1", 00:07:14.457 "core_mask": "0x1", 00:07:14.457 "workload": "randrw", 00:07:14.457 "percentage": 50, 00:07:14.457 "status": "finished", 00:07:14.457 "queue_depth": 1, 00:07:14.457 "io_size": 131072, 00:07:14.457 "runtime": 1.43458, 00:07:14.457 "iops": 17481.074600231426, 00:07:14.457 "mibps": 2185.1343250289283, 00:07:14.457 "io_failed": 1, 00:07:14.457 "io_timeout": 0, 00:07:14.457 "avg_latency_us": 79.32766017463418, 00:07:14.457 "min_latency_us": 24.370305676855896, 00:07:14.457 "max_latency_us": 1352.216593886463 00:07:14.457 } 00:07:14.457 ], 00:07:14.457 "core_count": 1 00:07:14.457 } 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61277 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61277 ']' 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61277 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.457 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61277 00:07:14.718 killing process with pid 61277 00:07:14.718 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.718 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.718 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61277' 00:07:14.718 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61277 00:07:14.718 [2024-11-19 10:18:28.268462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.718 10:18:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61277 00:07:14.718 [2024-11-19 10:18:28.406909] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kj8hDhoooi 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:07:16.098 00:07:16.098 real 0m4.295s 00:07:16.098 user 0m5.186s 00:07:16.098 sys 0m0.527s 00:07:16.098 ************************************ 00:07:16.098 END TEST raid_read_error_test 00:07:16.098 ************************************ 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.098 10:18:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.098 10:18:29 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:16.098 10:18:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:16.098 10:18:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.098 10:18:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:16.098 ************************************ 00:07:16.098 START TEST raid_write_error_test 00:07:16.098 ************************************ 00:07:16.098 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:16.098 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:16.098 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:16.098 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:16.098 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:16.098 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.099 10:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6AOS91OKA9 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61423 00:07:16.099 10:18:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61423 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61423 ']' 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.099 10:18:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.099 [2024-11-19 10:18:29.683418] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:16.099 [2024-11-19 10:18:29.683632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61423 ] 00:07:16.099 [2024-11-19 10:18:29.856697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.358 [2024-11-19 10:18:29.966369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.619 [2024-11-19 10:18:30.161340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.619 [2024-11-19 10:18:30.161370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.887 BaseBdev1_malloc 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.887 true 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.887 [2024-11-19 10:18:30.565570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:16.887 [2024-11-19 10:18:30.565666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.887 [2024-11-19 10:18:30.565703] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:16.887 [2024-11-19 10:18:30.565733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.887 [2024-11-19 10:18:30.567792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.887 [2024-11-19 10:18:30.567868] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:16.887 BaseBdev1 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.887 BaseBdev2_malloc 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.887 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:16.888 10:18:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.888 true 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.888 [2024-11-19 10:18:30.631169] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:16.888 [2024-11-19 10:18:30.631221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:16.888 [2024-11-19 10:18:30.631252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:16.888 [2024-11-19 10:18:30.631262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:16.888 [2024-11-19 10:18:30.633227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:16.888 [2024-11-19 10:18:30.633263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:16.888 BaseBdev2 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.888 [2024-11-19 10:18:30.643202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:16.888 [2024-11-19 10:18:30.644968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:16.888 [2024-11-19 10:18:30.645219] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:16.888 [2024-11-19 10:18:30.645268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:16.888 [2024-11-19 10:18:30.645502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:16.888 [2024-11-19 10:18:30.645717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:16.888 [2024-11-19 10:18:30.645760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:16.888 [2024-11-19 10:18:30.645947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.888 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.167 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.167 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:17.167 "name": "raid_bdev1", 00:07:17.167 "uuid": "2d616de7-a744-4e3c-9475-8a424e4ceb27", 00:07:17.167 "strip_size_kb": 64, 00:07:17.167 "state": "online", 00:07:17.167 "raid_level": "raid0", 00:07:17.167 "superblock": true, 00:07:17.167 "num_base_bdevs": 2, 00:07:17.167 "num_base_bdevs_discovered": 2, 00:07:17.167 "num_base_bdevs_operational": 2, 00:07:17.167 "base_bdevs_list": [ 00:07:17.167 { 00:07:17.167 "name": "BaseBdev1", 00:07:17.167 "uuid": "0d2570f0-2fad-5f00-abc2-6535ba3efd59", 00:07:17.167 "is_configured": true, 00:07:17.167 "data_offset": 2048, 00:07:17.167 "data_size": 63488 00:07:17.167 }, 00:07:17.167 { 00:07:17.167 "name": "BaseBdev2", 00:07:17.167 "uuid": "3bca6189-ae5c-5665-87aa-acc0733a3714", 00:07:17.167 "is_configured": true, 00:07:17.167 "data_offset": 2048, 00:07:17.167 "data_size": 63488 00:07:17.167 } 00:07:17.167 ] 00:07:17.167 }' 00:07:17.167 10:18:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:17.167 10:18:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.427 10:18:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:17.427 10:18:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:17.427 [2024-11-19 10:18:31.123657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.369 10:18:32 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.369 "name": "raid_bdev1", 00:07:18.369 "uuid": "2d616de7-a744-4e3c-9475-8a424e4ceb27", 00:07:18.369 "strip_size_kb": 64, 00:07:18.369 "state": "online", 00:07:18.369 "raid_level": "raid0", 00:07:18.369 "superblock": true, 00:07:18.369 "num_base_bdevs": 2, 00:07:18.369 "num_base_bdevs_discovered": 2, 00:07:18.369 "num_base_bdevs_operational": 2, 00:07:18.369 "base_bdevs_list": [ 00:07:18.369 { 00:07:18.369 "name": "BaseBdev1", 00:07:18.369 "uuid": "0d2570f0-2fad-5f00-abc2-6535ba3efd59", 00:07:18.369 "is_configured": true, 00:07:18.369 "data_offset": 2048, 00:07:18.369 "data_size": 63488 00:07:18.369 }, 00:07:18.369 { 00:07:18.369 "name": "BaseBdev2", 00:07:18.369 "uuid": "3bca6189-ae5c-5665-87aa-acc0733a3714", 00:07:18.369 "is_configured": true, 00:07:18.369 "data_offset": 2048, 00:07:18.369 "data_size": 63488 00:07:18.369 } 00:07:18.369 ] 00:07:18.369 }' 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.369 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.939 [2024-11-19 10:18:32.501481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:18.939 [2024-11-19 10:18:32.501576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:18.939 [2024-11-19 10:18:32.504335] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:18.939 [2024-11-19 10:18:32.504421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:18.939 [2024-11-19 10:18:32.504471] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:18.939 [2024-11-19 10:18:32.504513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:18.939 { 00:07:18.939 "results": [ 00:07:18.939 { 00:07:18.939 "job": "raid_bdev1", 00:07:18.939 "core_mask": "0x1", 00:07:18.939 "workload": "randrw", 00:07:18.939 "percentage": 50, 00:07:18.939 "status": "finished", 00:07:18.939 "queue_depth": 1, 00:07:18.939 "io_size": 131072, 00:07:18.939 "runtime": 1.378879, 00:07:18.939 "iops": 17342.34838589898, 00:07:18.939 "mibps": 2167.7935482373723, 00:07:18.939 "io_failed": 1, 00:07:18.939 "io_timeout": 0, 00:07:18.939 "avg_latency_us": 79.85990088939515, 00:07:18.939 "min_latency_us": 24.258515283842794, 00:07:18.939 "max_latency_us": 1402.2986899563318 00:07:18.939 } 00:07:18.939 ], 00:07:18.939 "core_count": 1 00:07:18.939 } 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61423 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61423 ']' 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61423 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61423 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61423' 00:07:18.939 killing process with pid 61423 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61423 00:07:18.939 [2024-11-19 10:18:32.545741] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:18.939 10:18:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61423 00:07:18.939 [2024-11-19 10:18:32.678015] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6AOS91OKA9 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:20.320 00:07:20.320 real 0m4.191s 00:07:20.320 user 0m5.008s 00:07:20.320 sys 0m0.518s 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.320 ************************************ 00:07:20.320 END TEST raid_write_error_test 00:07:20.320 ************************************ 00:07:20.320 10:18:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.320 10:18:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:20.320 10:18:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:20.320 10:18:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:20.320 10:18:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.320 10:18:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.320 ************************************ 00:07:20.320 START TEST raid_state_function_test 00:07:20.320 ************************************ 00:07:20.320 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:20.320 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:20.320 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:20.320 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:20.320 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:20.321 Process raid pid: 61561 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61561 
00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61561' 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61561 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61561 ']' 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:20.321 10:18:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.321 [2024-11-19 10:18:33.934091] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:20.321 [2024-11-19 10:18:33.934280] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.581 [2024-11-19 10:18:34.106217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.581 [2024-11-19 10:18:34.211378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.841 [2024-11-19 10:18:34.409346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:20.841 [2024-11-19 10:18:34.409373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.102 [2024-11-19 10:18:34.754387] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.102 [2024-11-19 10:18:34.754440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.102 [2024-11-19 10:18:34.754449] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.102 [2024-11-19 10:18:34.754474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.102 10:18:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.102 "name": "Existed_Raid", 00:07:21.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.102 "strip_size_kb": 64, 00:07:21.102 "state": "configuring", 00:07:21.102 
"raid_level": "concat", 00:07:21.102 "superblock": false, 00:07:21.102 "num_base_bdevs": 2, 00:07:21.102 "num_base_bdevs_discovered": 0, 00:07:21.102 "num_base_bdevs_operational": 2, 00:07:21.102 "base_bdevs_list": [ 00:07:21.102 { 00:07:21.102 "name": "BaseBdev1", 00:07:21.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.102 "is_configured": false, 00:07:21.102 "data_offset": 0, 00:07:21.102 "data_size": 0 00:07:21.102 }, 00:07:21.102 { 00:07:21.102 "name": "BaseBdev2", 00:07:21.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.102 "is_configured": false, 00:07:21.102 "data_offset": 0, 00:07:21.102 "data_size": 0 00:07:21.102 } 00:07:21.102 ] 00:07:21.102 }' 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.102 10:18:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.672 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:21.672 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.672 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.672 [2024-11-19 10:18:35.205540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:21.672 [2024-11-19 10:18:35.205613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:21.673 [2024-11-19 10:18:35.217522] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:21.673 [2024-11-19 10:18:35.217613] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:21.673 [2024-11-19 10:18:35.217640] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:21.673 [2024-11-19 10:18:35.217664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.673 [2024-11-19 10:18:35.264723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:21.673 BaseBdev1 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.673 [ 00:07:21.673 { 00:07:21.673 "name": "BaseBdev1", 00:07:21.673 "aliases": [ 00:07:21.673 "5656fb01-281a-424c-9b3b-49b2c808c3e6" 00:07:21.673 ], 00:07:21.673 "product_name": "Malloc disk", 00:07:21.673 "block_size": 512, 00:07:21.673 "num_blocks": 65536, 00:07:21.673 "uuid": "5656fb01-281a-424c-9b3b-49b2c808c3e6", 00:07:21.673 "assigned_rate_limits": { 00:07:21.673 "rw_ios_per_sec": 0, 00:07:21.673 "rw_mbytes_per_sec": 0, 00:07:21.673 "r_mbytes_per_sec": 0, 00:07:21.673 "w_mbytes_per_sec": 0 00:07:21.673 }, 00:07:21.673 "claimed": true, 00:07:21.673 "claim_type": "exclusive_write", 00:07:21.673 "zoned": false, 00:07:21.673 "supported_io_types": { 00:07:21.673 "read": true, 00:07:21.673 "write": true, 00:07:21.673 "unmap": true, 00:07:21.673 "flush": true, 00:07:21.673 "reset": true, 00:07:21.673 "nvme_admin": false, 00:07:21.673 "nvme_io": false, 00:07:21.673 "nvme_io_md": false, 00:07:21.673 "write_zeroes": true, 00:07:21.673 "zcopy": true, 00:07:21.673 "get_zone_info": false, 00:07:21.673 "zone_management": false, 00:07:21.673 "zone_append": false, 00:07:21.673 "compare": false, 00:07:21.673 "compare_and_write": false, 00:07:21.673 "abort": true, 00:07:21.673 "seek_hole": false, 00:07:21.673 "seek_data": false, 00:07:21.673 "copy": true, 00:07:21.673 "nvme_iov_md": 
false 00:07:21.673 }, 00:07:21.673 "memory_domains": [ 00:07:21.673 { 00:07:21.673 "dma_device_id": "system", 00:07:21.673 "dma_device_type": 1 00:07:21.673 }, 00:07:21.673 { 00:07:21.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.673 "dma_device_type": 2 00:07:21.673 } 00:07:21.673 ], 00:07:21.673 "driver_specific": {} 00:07:21.673 } 00:07:21.673 ] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.673 10:18:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:21.673 "name": "Existed_Raid", 00:07:21.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.673 "strip_size_kb": 64, 00:07:21.673 "state": "configuring", 00:07:21.673 "raid_level": "concat", 00:07:21.673 "superblock": false, 00:07:21.673 "num_base_bdevs": 2, 00:07:21.673 "num_base_bdevs_discovered": 1, 00:07:21.673 "num_base_bdevs_operational": 2, 00:07:21.673 "base_bdevs_list": [ 00:07:21.673 { 00:07:21.673 "name": "BaseBdev1", 00:07:21.673 "uuid": "5656fb01-281a-424c-9b3b-49b2c808c3e6", 00:07:21.673 "is_configured": true, 00:07:21.673 "data_offset": 0, 00:07:21.673 "data_size": 65536 00:07:21.673 }, 00:07:21.673 { 00:07:21.673 "name": "BaseBdev2", 00:07:21.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.673 "is_configured": false, 00:07:21.673 "data_offset": 0, 00:07:21.673 "data_size": 0 00:07:21.673 } 00:07:21.673 ] 00:07:21.673 }' 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:21.673 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.243 [2024-11-19 10:18:35.743920] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:22.243 [2024-11-19 10:18:35.744015] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.243 [2024-11-19 10:18:35.755939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:22.243 [2024-11-19 10:18:35.757662] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:22.243 [2024-11-19 10:18:35.757735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.243 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.243 "name": "Existed_Raid", 00:07:22.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.243 "strip_size_kb": 64, 00:07:22.243 "state": "configuring", 00:07:22.243 "raid_level": "concat", 00:07:22.243 "superblock": false, 00:07:22.243 "num_base_bdevs": 2, 00:07:22.243 "num_base_bdevs_discovered": 1, 00:07:22.243 "num_base_bdevs_operational": 2, 00:07:22.243 "base_bdevs_list": [ 00:07:22.244 { 00:07:22.244 "name": "BaseBdev1", 00:07:22.244 "uuid": "5656fb01-281a-424c-9b3b-49b2c808c3e6", 00:07:22.244 "is_configured": true, 00:07:22.244 "data_offset": 0, 00:07:22.244 "data_size": 65536 00:07:22.244 }, 00:07:22.244 { 00:07:22.244 "name": "BaseBdev2", 00:07:22.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:22.244 "is_configured": false, 00:07:22.244 "data_offset": 0, 00:07:22.244 "data_size": 0 
00:07:22.244 } 00:07:22.244 ] 00:07:22.244 }' 00:07:22.244 10:18:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:22.244 10:18:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 [2024-11-19 10:18:36.227417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:22.504 [2024-11-19 10:18:36.227530] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:22.504 [2024-11-19 10:18:36.227555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:22.504 [2024-11-19 10:18:36.227858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:22.504 [2024-11-19 10:18:36.228081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:22.504 [2024-11-19 10:18:36.228132] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:22.504 BaseBdev2 00:07:22.504 [2024-11-19 10:18:36.228412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:22.504 10:18:36
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.504 [ 00:07:22.504 { 00:07:22.504 "name": "BaseBdev2", 00:07:22.504 "aliases": [ 00:07:22.504 "fd4c09da-0ab4-4bac-879b-7d85f2e51495" 00:07:22.504 ], 00:07:22.504 "product_name": "Malloc disk", 00:07:22.504 "block_size": 512, 00:07:22.504 "num_blocks": 65536, 00:07:22.504 "uuid": "fd4c09da-0ab4-4bac-879b-7d85f2e51495", 00:07:22.504 "assigned_rate_limits": { 00:07:22.504 "rw_ios_per_sec": 0, 00:07:22.504 "rw_mbytes_per_sec": 0, 00:07:22.504 "r_mbytes_per_sec": 0, 00:07:22.504 "w_mbytes_per_sec": 0 00:07:22.504 }, 00:07:22.504 "claimed": true, 00:07:22.504 "claim_type": "exclusive_write", 00:07:22.504 "zoned": false, 00:07:22.504 "supported_io_types": { 00:07:22.504 "read": true, 00:07:22.504 "write": true, 00:07:22.504 "unmap": true, 00:07:22.504 "flush": true, 00:07:22.504 "reset": true, 00:07:22.504 "nvme_admin": false, 00:07:22.504 "nvme_io": false, 00:07:22.504 "nvme_io_md": 
false, 00:07:22.504 "write_zeroes": true, 00:07:22.504 "zcopy": true, 00:07:22.504 "get_zone_info": false, 00:07:22.504 "zone_management": false, 00:07:22.504 "zone_append": false, 00:07:22.504 "compare": false, 00:07:22.504 "compare_and_write": false, 00:07:22.504 "abort": true, 00:07:22.504 "seek_hole": false, 00:07:22.504 "seek_data": false, 00:07:22.504 "copy": true, 00:07:22.504 "nvme_iov_md": false 00:07:22.504 }, 00:07:22.504 "memory_domains": [ 00:07:22.504 { 00:07:22.504 "dma_device_id": "system", 00:07:22.504 "dma_device_type": 1 00:07:22.504 }, 00:07:22.504 { 00:07:22.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:22.504 "dma_device_type": 2 00:07:22.504 } 00:07:22.504 ], 00:07:22.504 "driver_specific": {} 00:07:22.504 } 00:07:22.504 ] 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.504 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.764 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.765 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:22.765 "name": "Existed_Raid", 00:07:22.765 "uuid": "942cb4dc-7824-4480-a0e0-1d85b18d849a", 00:07:22.765 "strip_size_kb": 64, 00:07:22.765 "state": "online", 00:07:22.765 "raid_level": "concat", 00:07:22.765 "superblock": false, 00:07:22.765 "num_base_bdevs": 2, 00:07:22.765 "num_base_bdevs_discovered": 2, 00:07:22.765 "num_base_bdevs_operational": 2, 00:07:22.765 "base_bdevs_list": [ 00:07:22.765 { 00:07:22.765 "name": "BaseBdev1", 00:07:22.765 "uuid": "5656fb01-281a-424c-9b3b-49b2c808c3e6", 00:07:22.765 "is_configured": true, 00:07:22.765 "data_offset": 0, 00:07:22.765 "data_size": 65536 00:07:22.765 }, 00:07:22.765 { 00:07:22.765 "name": "BaseBdev2", 00:07:22.765 "uuid": "fd4c09da-0ab4-4bac-879b-7d85f2e51495", 00:07:22.765 "is_configured": true, 00:07:22.765 "data_offset": 0, 00:07:22.765 "data_size": 65536 00:07:22.765 } 00:07:22.765 ] 00:07:22.765 }' 00:07:22.765 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:22.765 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.025 [2024-11-19 10:18:36.698889] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:23.025 "name": "Existed_Raid", 00:07:23.025 "aliases": [ 00:07:23.025 "942cb4dc-7824-4480-a0e0-1d85b18d849a" 00:07:23.025 ], 00:07:23.025 "product_name": "Raid Volume", 00:07:23.025 "block_size": 512, 00:07:23.025 "num_blocks": 131072, 00:07:23.025 "uuid": "942cb4dc-7824-4480-a0e0-1d85b18d849a", 00:07:23.025 "assigned_rate_limits": { 00:07:23.025 "rw_ios_per_sec": 0, 00:07:23.025 "rw_mbytes_per_sec": 0, 00:07:23.025 "r_mbytes_per_sec": 
0, 00:07:23.025 "w_mbytes_per_sec": 0 00:07:23.025 }, 00:07:23.025 "claimed": false, 00:07:23.025 "zoned": false, 00:07:23.025 "supported_io_types": { 00:07:23.025 "read": true, 00:07:23.025 "write": true, 00:07:23.025 "unmap": true, 00:07:23.025 "flush": true, 00:07:23.025 "reset": true, 00:07:23.025 "nvme_admin": false, 00:07:23.025 "nvme_io": false, 00:07:23.025 "nvme_io_md": false, 00:07:23.025 "write_zeroes": true, 00:07:23.025 "zcopy": false, 00:07:23.025 "get_zone_info": false, 00:07:23.025 "zone_management": false, 00:07:23.025 "zone_append": false, 00:07:23.025 "compare": false, 00:07:23.025 "compare_and_write": false, 00:07:23.025 "abort": false, 00:07:23.025 "seek_hole": false, 00:07:23.025 "seek_data": false, 00:07:23.025 "copy": false, 00:07:23.025 "nvme_iov_md": false 00:07:23.025 }, 00:07:23.025 "memory_domains": [ 00:07:23.025 { 00:07:23.025 "dma_device_id": "system", 00:07:23.025 "dma_device_type": 1 00:07:23.025 }, 00:07:23.025 { 00:07:23.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.025 "dma_device_type": 2 00:07:23.025 }, 00:07:23.025 { 00:07:23.025 "dma_device_id": "system", 00:07:23.025 "dma_device_type": 1 00:07:23.025 }, 00:07:23.025 { 00:07:23.025 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:23.025 "dma_device_type": 2 00:07:23.025 } 00:07:23.025 ], 00:07:23.025 "driver_specific": { 00:07:23.025 "raid": { 00:07:23.025 "uuid": "942cb4dc-7824-4480-a0e0-1d85b18d849a", 00:07:23.025 "strip_size_kb": 64, 00:07:23.025 "state": "online", 00:07:23.025 "raid_level": "concat", 00:07:23.025 "superblock": false, 00:07:23.025 "num_base_bdevs": 2, 00:07:23.025 "num_base_bdevs_discovered": 2, 00:07:23.025 "num_base_bdevs_operational": 2, 00:07:23.025 "base_bdevs_list": [ 00:07:23.025 { 00:07:23.025 "name": "BaseBdev1", 00:07:23.025 "uuid": "5656fb01-281a-424c-9b3b-49b2c808c3e6", 00:07:23.025 "is_configured": true, 00:07:23.025 "data_offset": 0, 00:07:23.025 "data_size": 65536 00:07:23.025 }, 00:07:23.025 { 00:07:23.025 "name": "BaseBdev2", 
00:07:23.025 "uuid": "fd4c09da-0ab4-4bac-879b-7d85f2e51495", 00:07:23.025 "is_configured": true, 00:07:23.025 "data_offset": 0, 00:07:23.025 "data_size": 65536 00:07:23.025 } 00:07:23.025 ] 00:07:23.025 } 00:07:23.025 } 00:07:23.025 }' 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:23.025 BaseBdev2' 00:07:23.025 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.289 10:18:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.289 [2024-11-19 10:18:36.914302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:23.289 [2024-11-19 10:18:36.914370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:23.289 [2024-11-19 10:18:36.914422] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:23.289 "name": "Existed_Raid", 00:07:23.289 "uuid": "942cb4dc-7824-4480-a0e0-1d85b18d849a", 00:07:23.289 "strip_size_kb": 64, 00:07:23.289 
"state": "offline", 00:07:23.289 "raid_level": "concat", 00:07:23.289 "superblock": false, 00:07:23.289 "num_base_bdevs": 2, 00:07:23.289 "num_base_bdevs_discovered": 1, 00:07:23.289 "num_base_bdevs_operational": 1, 00:07:23.289 "base_bdevs_list": [ 00:07:23.289 { 00:07:23.289 "name": null, 00:07:23.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:23.289 "is_configured": false, 00:07:23.289 "data_offset": 0, 00:07:23.289 "data_size": 65536 00:07:23.289 }, 00:07:23.289 { 00:07:23.289 "name": "BaseBdev2", 00:07:23.289 "uuid": "fd4c09da-0ab4-4bac-879b-7d85f2e51495", 00:07:23.289 "is_configured": true, 00:07:23.289 "data_offset": 0, 00:07:23.289 "data_size": 65536 00:07:23.289 } 00:07:23.289 ] 00:07:23.289 }' 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:23.289 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.864 [2024-11-19 10:18:37.449166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:23.864 [2024-11-19 10:18:37.449280] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:23.864 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61561 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61561 ']' 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61561 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61561 00:07:23.865 killing process with pid 61561 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61561' 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61561 00:07:23.865 10:18:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61561 00:07:23.865 [2024-11-19 10:18:37.632251] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:24.125 [2024-11-19 10:18:37.649551] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:25.063 00:07:25.063 real 0m4.848s 00:07:25.063 user 0m7.015s 00:07:25.063 sys 0m0.772s 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.063 ************************************ 00:07:25.063 END TEST raid_state_function_test 00:07:25.063 ************************************ 00:07:25.063 10:18:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:25.063 10:18:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:25.063 10:18:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.063 10:18:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:25.063 ************************************ 00:07:25.063 START TEST raid_state_function_test_sb 00:07:25.063 ************************************ 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61814 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61814' 00:07:25.063 Process raid pid: 61814 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61814 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61814 ']' 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.063 10:18:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.064 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.064 10:18:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:25.323 [2024-11-19 10:18:38.846786] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:25.323 [2024-11-19 10:18:38.846971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.323 [2024-11-19 10:18:39.018141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.584 [2024-11-19 10:18:39.124455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.584 [2024-11-19 10:18:39.306871] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:25.584 [2024-11-19 10:18:39.307001] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.156 [2024-11-19 10:18:39.676188] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:26.156 [2024-11-19 10:18:39.676242] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.156 [2024-11-19 10:18:39.676252] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.156 [2024-11-19 10:18:39.676262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.156 "name": "Existed_Raid", 00:07:26.156 "uuid": "1c70c437-9e8d-40ef-a8c1-6860a04ee403", 00:07:26.156 "strip_size_kb": 64, 00:07:26.156 "state": "configuring", 00:07:26.156 "raid_level": "concat", 00:07:26.156 "superblock": true, 00:07:26.156 "num_base_bdevs": 2, 00:07:26.156 "num_base_bdevs_discovered": 0, 00:07:26.156 "num_base_bdevs_operational": 2, 00:07:26.156 "base_bdevs_list": [ 00:07:26.156 { 00:07:26.156 "name": "BaseBdev1", 00:07:26.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.156 "is_configured": false, 00:07:26.156 "data_offset": 0, 00:07:26.156 "data_size": 0 00:07:26.156 }, 00:07:26.156 { 00:07:26.156 "name": "BaseBdev2", 00:07:26.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.156 "is_configured": false, 00:07:26.156 "data_offset": 0, 00:07:26.156 "data_size": 0 00:07:26.156 } 00:07:26.156 ] 00:07:26.156 }' 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.156 10:18:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 [2024-11-19 10:18:40.127325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:26.417 [2024-11-19 10:18:40.127411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 [2024-11-19 10:18:40.135318] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:26.417 [2024-11-19 10:18:40.135397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:26.417 [2024-11-19 10:18:40.135427] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.417 [2024-11-19 10:18:40.135453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 [2024-11-19 10:18:40.178531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.417 BaseBdev1 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.417 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.678 [ 00:07:26.678 { 00:07:26.678 "name": "BaseBdev1", 00:07:26.678 "aliases": [ 00:07:26.678 "c3bc1f30-25a6-469a-834a-9cc6ea515e01" 00:07:26.678 ], 00:07:26.678 "product_name": "Malloc disk", 00:07:26.678 "block_size": 512, 00:07:26.678 "num_blocks": 65536, 00:07:26.678 "uuid": "c3bc1f30-25a6-469a-834a-9cc6ea515e01", 00:07:26.678 "assigned_rate_limits": { 00:07:26.678 "rw_ios_per_sec": 0, 00:07:26.678 "rw_mbytes_per_sec": 0, 00:07:26.678 "r_mbytes_per_sec": 0, 00:07:26.678 "w_mbytes_per_sec": 0 00:07:26.678 }, 00:07:26.678 "claimed": true, 
00:07:26.678 "claim_type": "exclusive_write", 00:07:26.678 "zoned": false, 00:07:26.678 "supported_io_types": { 00:07:26.678 "read": true, 00:07:26.678 "write": true, 00:07:26.678 "unmap": true, 00:07:26.678 "flush": true, 00:07:26.678 "reset": true, 00:07:26.678 "nvme_admin": false, 00:07:26.678 "nvme_io": false, 00:07:26.678 "nvme_io_md": false, 00:07:26.678 "write_zeroes": true, 00:07:26.678 "zcopy": true, 00:07:26.678 "get_zone_info": false, 00:07:26.678 "zone_management": false, 00:07:26.678 "zone_append": false, 00:07:26.678 "compare": false, 00:07:26.678 "compare_and_write": false, 00:07:26.678 "abort": true, 00:07:26.678 "seek_hole": false, 00:07:26.678 "seek_data": false, 00:07:26.678 "copy": true, 00:07:26.678 "nvme_iov_md": false 00:07:26.678 }, 00:07:26.678 "memory_domains": [ 00:07:26.678 { 00:07:26.678 "dma_device_id": "system", 00:07:26.678 "dma_device_type": 1 00:07:26.678 }, 00:07:26.678 { 00:07:26.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.678 "dma_device_type": 2 00:07:26.678 } 00:07:26.678 ], 00:07:26.678 "driver_specific": {} 00:07:26.678 } 00:07:26.678 ] 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.678 10:18:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.678 "name": "Existed_Raid", 00:07:26.678 "uuid": "9b7888b2-f80e-4c59-b61b-23bb16b0d8c4", 00:07:26.678 "strip_size_kb": 64, 00:07:26.678 "state": "configuring", 00:07:26.678 "raid_level": "concat", 00:07:26.678 "superblock": true, 00:07:26.678 "num_base_bdevs": 2, 00:07:26.678 "num_base_bdevs_discovered": 1, 00:07:26.678 "num_base_bdevs_operational": 2, 00:07:26.678 "base_bdevs_list": [ 00:07:26.678 { 00:07:26.678 "name": "BaseBdev1", 00:07:26.678 "uuid": "c3bc1f30-25a6-469a-834a-9cc6ea515e01", 00:07:26.678 "is_configured": true, 00:07:26.678 "data_offset": 2048, 00:07:26.678 "data_size": 63488 00:07:26.678 }, 00:07:26.678 { 00:07:26.678 "name": "BaseBdev2", 00:07:26.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.678 
"is_configured": false, 00:07:26.678 "data_offset": 0, 00:07:26.678 "data_size": 0 00:07:26.678 } 00:07:26.678 ] 00:07:26.678 }' 00:07:26.678 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.679 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.939 [2024-11-19 10:18:40.653725] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.939 [2024-11-19 10:18:40.653769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.939 [2024-11-19 10:18:40.661768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.939 [2024-11-19 10:18:40.663556] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.939 [2024-11-19 10:18:40.663632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.939 10:18:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.939 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.199 10:18:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.199 "name": "Existed_Raid", 00:07:27.200 "uuid": "80c53b0c-f3a1-4330-bfa3-e4d0d14db6e1", 00:07:27.200 "strip_size_kb": 64, 00:07:27.200 "state": "configuring", 00:07:27.200 "raid_level": "concat", 00:07:27.200 "superblock": true, 00:07:27.200 "num_base_bdevs": 2, 00:07:27.200 "num_base_bdevs_discovered": 1, 00:07:27.200 "num_base_bdevs_operational": 2, 00:07:27.200 "base_bdevs_list": [ 00:07:27.200 { 00:07:27.200 "name": "BaseBdev1", 00:07:27.200 "uuid": "c3bc1f30-25a6-469a-834a-9cc6ea515e01", 00:07:27.200 "is_configured": true, 00:07:27.200 "data_offset": 2048, 00:07:27.200 "data_size": 63488 00:07:27.200 }, 00:07:27.200 { 00:07:27.200 "name": "BaseBdev2", 00:07:27.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:27.200 "is_configured": false, 00:07:27.200 "data_offset": 0, 00:07:27.200 "data_size": 0 00:07:27.200 } 00:07:27.200 ] 00:07:27.200 }' 00:07:27.200 10:18:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.200 10:18:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.460 [2024-11-19 10:18:41.165457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.460 [2024-11-19 10:18:41.165793] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:27.460 [2024-11-19 10:18:41.165812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.460 [2024-11-19 10:18:41.166091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:27.460 BaseBdev2 00:07:27.460 [2024-11-19 10:18:41.166239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:27.460 [2024-11-19 10:18:41.166252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:27.460 [2024-11-19 10:18:41.166395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.460 10:18:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.460 [ 00:07:27.460 { 00:07:27.460 "name": "BaseBdev2", 00:07:27.460 "aliases": [ 00:07:27.460 "56433b46-c6e4-4e4c-8768-a142dd75dda9" 00:07:27.460 ], 00:07:27.460 "product_name": "Malloc disk", 00:07:27.460 "block_size": 512, 00:07:27.460 "num_blocks": 65536, 00:07:27.460 "uuid": "56433b46-c6e4-4e4c-8768-a142dd75dda9", 00:07:27.460 "assigned_rate_limits": { 00:07:27.460 "rw_ios_per_sec": 0, 00:07:27.460 "rw_mbytes_per_sec": 0, 00:07:27.460 "r_mbytes_per_sec": 0, 00:07:27.460 "w_mbytes_per_sec": 0 00:07:27.460 }, 00:07:27.460 "claimed": true, 00:07:27.460 "claim_type": "exclusive_write", 00:07:27.460 "zoned": false, 00:07:27.460 "supported_io_types": { 00:07:27.460 "read": true, 00:07:27.460 "write": true, 00:07:27.460 "unmap": true, 00:07:27.460 "flush": true, 00:07:27.460 "reset": true, 00:07:27.460 "nvme_admin": false, 00:07:27.460 "nvme_io": false, 00:07:27.460 "nvme_io_md": false, 00:07:27.460 "write_zeroes": true, 00:07:27.460 "zcopy": true, 00:07:27.460 "get_zone_info": false, 00:07:27.460 "zone_management": false, 00:07:27.460 "zone_append": false, 00:07:27.460 "compare": false, 00:07:27.460 "compare_and_write": false, 00:07:27.460 "abort": true, 00:07:27.460 "seek_hole": false, 00:07:27.460 "seek_data": false, 00:07:27.460 "copy": true, 00:07:27.460 "nvme_iov_md": false 00:07:27.460 }, 00:07:27.460 "memory_domains": [ 00:07:27.460 { 00:07:27.460 "dma_device_id": "system", 00:07:27.460 "dma_device_type": 1 00:07:27.460 }, 00:07:27.460 { 00:07:27.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.460 "dma_device_type": 2 00:07:27.460 } 00:07:27.460 ], 00:07:27.460 "driver_specific": {} 00:07:27.460 } 00:07:27.460 ] 00:07:27.460 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:27.461 10:18:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:27.461 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.720 10:18:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:27.720 "name": "Existed_Raid", 00:07:27.720 "uuid": "80c53b0c-f3a1-4330-bfa3-e4d0d14db6e1", 00:07:27.720 "strip_size_kb": 64, 00:07:27.721 "state": "online", 00:07:27.721 "raid_level": "concat", 00:07:27.721 "superblock": true, 00:07:27.721 "num_base_bdevs": 2, 00:07:27.721 "num_base_bdevs_discovered": 2, 00:07:27.721 "num_base_bdevs_operational": 2, 00:07:27.721 "base_bdevs_list": [ 00:07:27.721 { 00:07:27.721 "name": "BaseBdev1", 00:07:27.721 "uuid": "c3bc1f30-25a6-469a-834a-9cc6ea515e01", 00:07:27.721 "is_configured": true, 00:07:27.721 "data_offset": 2048, 00:07:27.721 "data_size": 63488 00:07:27.721 }, 00:07:27.721 { 00:07:27.721 "name": "BaseBdev2", 00:07:27.721 "uuid": "56433b46-c6e4-4e4c-8768-a142dd75dda9", 00:07:27.721 "is_configured": true, 00:07:27.721 "data_offset": 2048, 00:07:27.721 "data_size": 63488 00:07:27.721 } 00:07:27.721 ] 00:07:27.721 }' 00:07:27.721 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:27.721 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.980 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.981 [2024-11-19 10:18:41.648875] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:27.981 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.981 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:27.981 "name": "Existed_Raid", 00:07:27.981 "aliases": [ 00:07:27.981 "80c53b0c-f3a1-4330-bfa3-e4d0d14db6e1" 00:07:27.981 ], 00:07:27.981 "product_name": "Raid Volume", 00:07:27.981 "block_size": 512, 00:07:27.981 "num_blocks": 126976, 00:07:27.981 "uuid": "80c53b0c-f3a1-4330-bfa3-e4d0d14db6e1", 00:07:27.981 "assigned_rate_limits": { 00:07:27.981 "rw_ios_per_sec": 0, 00:07:27.981 "rw_mbytes_per_sec": 0, 00:07:27.981 "r_mbytes_per_sec": 0, 00:07:27.981 "w_mbytes_per_sec": 0 00:07:27.981 }, 00:07:27.981 "claimed": false, 00:07:27.981 "zoned": false, 00:07:27.981 "supported_io_types": { 00:07:27.981 "read": true, 00:07:27.981 "write": true, 00:07:27.981 "unmap": true, 00:07:27.981 "flush": true, 00:07:27.981 "reset": true, 00:07:27.981 "nvme_admin": false, 00:07:27.981 "nvme_io": false, 00:07:27.981 "nvme_io_md": false, 00:07:27.981 "write_zeroes": true, 00:07:27.981 "zcopy": false, 00:07:27.981 "get_zone_info": false, 00:07:27.981 "zone_management": false, 00:07:27.981 "zone_append": false, 00:07:27.981 "compare": false, 00:07:27.981 "compare_and_write": false, 00:07:27.981 "abort": false, 00:07:27.981 "seek_hole": false, 00:07:27.981 "seek_data": false, 00:07:27.981 "copy": false, 00:07:27.981 "nvme_iov_md": false 00:07:27.981 }, 00:07:27.981 "memory_domains": [ 00:07:27.981 { 00:07:27.981 
"dma_device_id": "system", 00:07:27.981 "dma_device_type": 1 00:07:27.981 }, 00:07:27.981 { 00:07:27.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.981 "dma_device_type": 2 00:07:27.981 }, 00:07:27.981 { 00:07:27.981 "dma_device_id": "system", 00:07:27.981 "dma_device_type": 1 00:07:27.981 }, 00:07:27.981 { 00:07:27.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.981 "dma_device_type": 2 00:07:27.981 } 00:07:27.981 ], 00:07:27.981 "driver_specific": { 00:07:27.981 "raid": { 00:07:27.981 "uuid": "80c53b0c-f3a1-4330-bfa3-e4d0d14db6e1", 00:07:27.981 "strip_size_kb": 64, 00:07:27.981 "state": "online", 00:07:27.981 "raid_level": "concat", 00:07:27.981 "superblock": true, 00:07:27.981 "num_base_bdevs": 2, 00:07:27.981 "num_base_bdevs_discovered": 2, 00:07:27.981 "num_base_bdevs_operational": 2, 00:07:27.981 "base_bdevs_list": [ 00:07:27.981 { 00:07:27.981 "name": "BaseBdev1", 00:07:27.981 "uuid": "c3bc1f30-25a6-469a-834a-9cc6ea515e01", 00:07:27.981 "is_configured": true, 00:07:27.981 "data_offset": 2048, 00:07:27.981 "data_size": 63488 00:07:27.981 }, 00:07:27.981 { 00:07:27.981 "name": "BaseBdev2", 00:07:27.981 "uuid": "56433b46-c6e4-4e4c-8768-a142dd75dda9", 00:07:27.981 "is_configured": true, 00:07:27.981 "data_offset": 2048, 00:07:27.981 "data_size": 63488 00:07:27.981 } 00:07:27.981 ] 00:07:27.981 } 00:07:27.981 } 00:07:27.981 }' 00:07:27.981 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:27.981 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:27.981 BaseBdev2' 00:07:27.981 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:28.241 10:18:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.241 [2024-11-19 10:18:41.892268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:28.241 [2024-11-19 10:18:41.892298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:28.241 [2024-11-19 10:18:41.892343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.241 10:18:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.241 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.502 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.502 "name": "Existed_Raid", 00:07:28.502 "uuid": "80c53b0c-f3a1-4330-bfa3-e4d0d14db6e1", 00:07:28.502 "strip_size_kb": 64, 00:07:28.502 "state": "offline", 00:07:28.502 "raid_level": "concat", 00:07:28.502 "superblock": true, 00:07:28.502 "num_base_bdevs": 2, 00:07:28.502 "num_base_bdevs_discovered": 1, 00:07:28.502 "num_base_bdevs_operational": 1, 00:07:28.502 "base_bdevs_list": [ 00:07:28.502 { 00:07:28.502 "name": null, 00:07:28.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:28.502 "is_configured": false, 00:07:28.502 "data_offset": 0, 00:07:28.502 "data_size": 63488 00:07:28.502 }, 00:07:28.502 { 00:07:28.502 "name": "BaseBdev2", 00:07:28.502 "uuid": "56433b46-c6e4-4e4c-8768-a142dd75dda9", 00:07:28.502 "is_configured": true, 00:07:28.502 "data_offset": 2048, 00:07:28.502 "data_size": 63488 00:07:28.502 } 00:07:28.502 ] 
00:07:28.502 }' 00:07:28.502 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.502 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.763 [2024-11-19 10:18:42.405714] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:28.763 [2024-11-19 10:18:42.405765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.763 10:18:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61814 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61814 ']' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61814 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:28.763 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61814 00:07:29.022 killing process with pid 61814 00:07:29.022 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.022 10:18:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.022 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61814' 00:07:29.022 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61814 00:07:29.022 [2024-11-19 10:18:42.570012] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:29.022 10:18:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61814 00:07:29.022 [2024-11-19 10:18:42.586139] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.034 10:18:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:30.034 00:07:30.034 real 0m4.879s 00:07:30.034 user 0m7.075s 00:07:30.034 sys 0m0.793s 00:07:30.034 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.034 10:18:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.034 ************************************ 00:07:30.034 END TEST raid_state_function_test_sb 00:07:30.034 ************************************ 00:07:30.034 10:18:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:30.034 10:18:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:30.034 10:18:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.034 10:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.034 ************************************ 00:07:30.034 START TEST raid_superblock_test 00:07:30.034 ************************************ 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62055 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62055 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62055 ']' 00:07:30.034 
10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.034 10:18:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.034 [2024-11-19 10:18:43.793804] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:30.034 [2024-11-19 10:18:43.793987] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62055 ] 00:07:30.293 [2024-11-19 10:18:43.964607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.554 [2024-11-19 10:18:44.073367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.554 [2024-11-19 10:18:44.257107] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:30.554 [2024-11-19 10:18:44.257242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.124 malloc1 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.124 [2024-11-19 10:18:44.656331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.124 [2024-11-19 10:18:44.656395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.124 [2024-11-19 10:18:44.656419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:31.124 [2024-11-19 10:18:44.656427] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:07:31.124 [2024-11-19 10:18:44.658422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.124 [2024-11-19 10:18:44.658458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.124 pt1 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.124 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.125 malloc2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.125 [2024-11-19 10:18:44.710350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:31.125 [2024-11-19 10:18:44.710438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.125 [2024-11-19 10:18:44.710474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:31.125 [2024-11-19 10:18:44.710500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.125 [2024-11-19 10:18:44.712519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.125 [2024-11-19 10:18:44.712611] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:31.125 pt2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.125 [2024-11-19 10:18:44.722394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.125 [2024-11-19 10:18:44.724159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:31.125 [2024-11-19 10:18:44.724356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.125 [2024-11-19 10:18:44.724402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:31.125 [2024-11-19 10:18:44.724641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.125 [2024-11-19 10:18:44.724812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.125 [2024-11-19 10:18:44.724855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:31.125 [2024-11-19 10:18:44.725040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.125 10:18:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.125 "name": "raid_bdev1", 00:07:31.125 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:31.125 "strip_size_kb": 64, 00:07:31.125 "state": "online", 00:07:31.125 "raid_level": "concat", 00:07:31.125 "superblock": true, 00:07:31.125 "num_base_bdevs": 2, 00:07:31.125 "num_base_bdevs_discovered": 2, 00:07:31.125 "num_base_bdevs_operational": 2, 00:07:31.125 "base_bdevs_list": [ 00:07:31.125 { 00:07:31.125 "name": "pt1", 00:07:31.125 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.125 "is_configured": true, 00:07:31.125 "data_offset": 2048, 00:07:31.125 "data_size": 63488 00:07:31.125 }, 00:07:31.125 { 00:07:31.125 "name": "pt2", 00:07:31.125 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.125 "is_configured": true, 00:07:31.125 "data_offset": 2048, 00:07:31.125 "data_size": 63488 00:07:31.125 } 00:07:31.125 ] 00:07:31.125 }' 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.125 10:18:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:31.385 [2024-11-19 10:18:45.153905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.385 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.645 "name": "raid_bdev1", 00:07:31.645 "aliases": [ 00:07:31.645 "90dae07f-7cab-458e-b018-77433227a517" 00:07:31.645 ], 00:07:31.645 "product_name": "Raid Volume", 00:07:31.645 "block_size": 512, 00:07:31.645 "num_blocks": 126976, 00:07:31.645 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:31.645 "assigned_rate_limits": { 00:07:31.645 "rw_ios_per_sec": 0, 00:07:31.645 "rw_mbytes_per_sec": 0, 00:07:31.645 "r_mbytes_per_sec": 0, 00:07:31.645 "w_mbytes_per_sec": 0 00:07:31.645 }, 00:07:31.645 "claimed": false, 00:07:31.645 "zoned": false, 00:07:31.645 "supported_io_types": { 00:07:31.645 "read": true, 00:07:31.645 "write": true, 00:07:31.645 "unmap": true, 00:07:31.645 "flush": true, 00:07:31.645 "reset": true, 00:07:31.645 "nvme_admin": false, 00:07:31.645 "nvme_io": false, 00:07:31.645 "nvme_io_md": false, 00:07:31.645 "write_zeroes": true, 00:07:31.645 "zcopy": false, 00:07:31.645 "get_zone_info": false, 00:07:31.645 "zone_management": false, 00:07:31.645 "zone_append": false, 00:07:31.645 "compare": false, 00:07:31.645 "compare_and_write": false, 00:07:31.645 "abort": false, 00:07:31.645 
"seek_hole": false, 00:07:31.645 "seek_data": false, 00:07:31.645 "copy": false, 00:07:31.645 "nvme_iov_md": false 00:07:31.645 }, 00:07:31.645 "memory_domains": [ 00:07:31.645 { 00:07:31.645 "dma_device_id": "system", 00:07:31.645 "dma_device_type": 1 00:07:31.645 }, 00:07:31.645 { 00:07:31.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.645 "dma_device_type": 2 00:07:31.645 }, 00:07:31.645 { 00:07:31.645 "dma_device_id": "system", 00:07:31.645 "dma_device_type": 1 00:07:31.645 }, 00:07:31.645 { 00:07:31.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.645 "dma_device_type": 2 00:07:31.645 } 00:07:31.645 ], 00:07:31.645 "driver_specific": { 00:07:31.645 "raid": { 00:07:31.645 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:31.645 "strip_size_kb": 64, 00:07:31.645 "state": "online", 00:07:31.645 "raid_level": "concat", 00:07:31.645 "superblock": true, 00:07:31.645 "num_base_bdevs": 2, 00:07:31.645 "num_base_bdevs_discovered": 2, 00:07:31.645 "num_base_bdevs_operational": 2, 00:07:31.645 "base_bdevs_list": [ 00:07:31.645 { 00:07:31.645 "name": "pt1", 00:07:31.645 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.645 "is_configured": true, 00:07:31.645 "data_offset": 2048, 00:07:31.645 "data_size": 63488 00:07:31.645 }, 00:07:31.645 { 00:07:31.645 "name": "pt2", 00:07:31.645 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.645 "is_configured": true, 00:07:31.645 "data_offset": 2048, 00:07:31.645 "data_size": 63488 00:07:31.645 } 00:07:31.645 ] 00:07:31.645 } 00:07:31.645 } 00:07:31.645 }' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:31.645 pt2' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq 
-r '.[] | .uuid' 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.645 [2024-11-19 10:18:45.373479] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=90dae07f-7cab-458e-b018-77433227a517 00:07:31.645 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 90dae07f-7cab-458e-b018-77433227a517 ']' 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.646 [2024-11-19 10:18:45.401185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.646 [2024-11-19 10:18:45.401207] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:31.646 [2024-11-19 10:18:45.401274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.646 [2024-11-19 10:18:45.401316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.646 [2024-11-19 10:18:45.401328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.646 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.906 [2024-11-19 10:18:45.533006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:31.906 [2024-11-19 10:18:45.534699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:31.906 [2024-11-19 10:18:45.534752] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:31.906 [2024-11-19 10:18:45.534797] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:31.906 [2024-11-19 10:18:45.534810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:31.906 [2024-11-19 10:18:45.534820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:31.906 request: 00:07:31.906 { 00:07:31.906 "name": "raid_bdev1", 00:07:31.906 "raid_level": "concat", 00:07:31.906 "base_bdevs": [ 00:07:31.906 "malloc1", 00:07:31.906 "malloc2" 00:07:31.906 ], 00:07:31.906 "strip_size_kb": 64, 00:07:31.906 "superblock": false, 00:07:31.906 "method": "bdev_raid_create", 00:07:31.906 "req_id": 1 00:07:31.906 } 00:07:31.906 Got JSON-RPC error response 00:07:31.906 response: 00:07:31.906 { 00:07:31.906 "code": -17, 00:07:31.906 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:31.906 } 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.906 [2024-11-19 10:18:45.596865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:31.906 [2024-11-19 10:18:45.596952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:31.906 [2024-11-19 10:18:45.596985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:31.906 [2024-11-19 10:18:45.597028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:31.906 [2024-11-19 10:18:45.599125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:31.906 [2024-11-19 10:18:45.599194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:31.906 [2024-11-19 10:18:45.599301] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:31.906 [2024-11-19 10:18:45.599389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:31.906 pt1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:31.906 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.907 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.907 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.907 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.907 "name": "raid_bdev1", 00:07:31.907 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:31.907 "strip_size_kb": 64, 00:07:31.907 "state": "configuring", 00:07:31.907 "raid_level": "concat", 00:07:31.907 "superblock": true, 00:07:31.907 "num_base_bdevs": 2, 00:07:31.907 "num_base_bdevs_discovered": 1, 00:07:31.907 "num_base_bdevs_operational": 2, 00:07:31.907 "base_bdevs_list": [ 00:07:31.907 { 00:07:31.907 
"name": "pt1", 00:07:31.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:31.907 "is_configured": true, 00:07:31.907 "data_offset": 2048, 00:07:31.907 "data_size": 63488 00:07:31.907 }, 00:07:31.907 { 00:07:31.907 "name": null, 00:07:31.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:31.907 "is_configured": false, 00:07:31.907 "data_offset": 2048, 00:07:31.907 "data_size": 63488 00:07:31.907 } 00:07:31.907 ] 00:07:31.907 }' 00:07:31.907 10:18:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.907 10:18:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.476 [2024-11-19 10:18:46.056096] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.476 [2024-11-19 10:18:46.056153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.476 [2024-11-19 10:18:46.056171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:32.476 [2024-11-19 10:18:46.056181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.476 [2024-11-19 10:18:46.056561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.476 [2024-11-19 10:18:46.056580] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.476 [2024-11-19 10:18:46.056643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:32.476 [2024-11-19 10:18:46.056665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:32.476 [2024-11-19 10:18:46.056766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.476 [2024-11-19 10:18:46.056776] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.476 [2024-11-19 10:18:46.056989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:32.476 [2024-11-19 10:18:46.057139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.476 [2024-11-19 10:18:46.057149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:32.476 [2024-11-19 10:18:46.057269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.476 pt2 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.476 
10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.476 "name": "raid_bdev1", 00:07:32.476 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:32.476 "strip_size_kb": 64, 00:07:32.476 "state": "online", 00:07:32.476 "raid_level": "concat", 00:07:32.476 "superblock": true, 00:07:32.476 "num_base_bdevs": 2, 00:07:32.476 "num_base_bdevs_discovered": 2, 00:07:32.476 "num_base_bdevs_operational": 2, 00:07:32.476 "base_bdevs_list": [ 00:07:32.476 { 00:07:32.476 "name": "pt1", 00:07:32.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.476 "is_configured": true, 00:07:32.476 "data_offset": 2048, 00:07:32.476 "data_size": 63488 00:07:32.476 }, 00:07:32.476 { 00:07:32.476 "name": "pt2", 00:07:32.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.476 "is_configured": true, 00:07:32.476 "data_offset": 2048, 00:07:32.476 "data_size": 63488 
00:07:32.476 } 00:07:32.476 ] 00:07:32.476 }' 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.476 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.736 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.736 [2024-11-19 10:18:46.499558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.996 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.996 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:32.996 "name": "raid_bdev1", 00:07:32.996 "aliases": [ 00:07:32.996 "90dae07f-7cab-458e-b018-77433227a517" 00:07:32.996 ], 00:07:32.996 "product_name": "Raid Volume", 00:07:32.996 "block_size": 512, 00:07:32.996 "num_blocks": 126976, 00:07:32.996 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:32.996 "assigned_rate_limits": { 00:07:32.996 
"rw_ios_per_sec": 0, 00:07:32.996 "rw_mbytes_per_sec": 0, 00:07:32.996 "r_mbytes_per_sec": 0, 00:07:32.996 "w_mbytes_per_sec": 0 00:07:32.996 }, 00:07:32.996 "claimed": false, 00:07:32.996 "zoned": false, 00:07:32.996 "supported_io_types": { 00:07:32.996 "read": true, 00:07:32.996 "write": true, 00:07:32.996 "unmap": true, 00:07:32.996 "flush": true, 00:07:32.996 "reset": true, 00:07:32.996 "nvme_admin": false, 00:07:32.996 "nvme_io": false, 00:07:32.996 "nvme_io_md": false, 00:07:32.996 "write_zeroes": true, 00:07:32.996 "zcopy": false, 00:07:32.996 "get_zone_info": false, 00:07:32.996 "zone_management": false, 00:07:32.996 "zone_append": false, 00:07:32.996 "compare": false, 00:07:32.996 "compare_and_write": false, 00:07:32.996 "abort": false, 00:07:32.996 "seek_hole": false, 00:07:32.996 "seek_data": false, 00:07:32.996 "copy": false, 00:07:32.996 "nvme_iov_md": false 00:07:32.997 }, 00:07:32.997 "memory_domains": [ 00:07:32.997 { 00:07:32.997 "dma_device_id": "system", 00:07:32.997 "dma_device_type": 1 00:07:32.997 }, 00:07:32.997 { 00:07:32.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.997 "dma_device_type": 2 00:07:32.997 }, 00:07:32.997 { 00:07:32.997 "dma_device_id": "system", 00:07:32.997 "dma_device_type": 1 00:07:32.997 }, 00:07:32.997 { 00:07:32.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:32.997 "dma_device_type": 2 00:07:32.997 } 00:07:32.997 ], 00:07:32.997 "driver_specific": { 00:07:32.997 "raid": { 00:07:32.997 "uuid": "90dae07f-7cab-458e-b018-77433227a517", 00:07:32.997 "strip_size_kb": 64, 00:07:32.997 "state": "online", 00:07:32.997 "raid_level": "concat", 00:07:32.997 "superblock": true, 00:07:32.997 "num_base_bdevs": 2, 00:07:32.997 "num_base_bdevs_discovered": 2, 00:07:32.997 "num_base_bdevs_operational": 2, 00:07:32.997 "base_bdevs_list": [ 00:07:32.997 { 00:07:32.997 "name": "pt1", 00:07:32.997 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:32.997 "is_configured": true, 00:07:32.997 "data_offset": 2048, 00:07:32.997 
"data_size": 63488 00:07:32.997 }, 00:07:32.997 { 00:07:32.997 "name": "pt2", 00:07:32.997 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:32.997 "is_configured": true, 00:07:32.997 "data_offset": 2048, 00:07:32.997 "data_size": 63488 00:07:32.997 } 00:07:32.997 ] 00:07:32.997 } 00:07:32.997 } 00:07:32.997 }' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:32.997 pt2' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.997 [2024-11-19 10:18:46.699184] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 90dae07f-7cab-458e-b018-77433227a517 '!=' 90dae07f-7cab-458e-b018-77433227a517 ']' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62055 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62055 
']' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62055 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.997 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62055 00:07:33.257 killing process with pid 62055 00:07:33.257 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.257 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.257 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62055' 00:07:33.257 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62055 00:07:33.257 [2024-11-19 10:18:46.784769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.257 [2024-11-19 10:18:46.784844] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.257 [2024-11-19 10:18:46.784889] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.257 [2024-11-19 10:18:46.784899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:33.257 10:18:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62055 00:07:33.257 [2024-11-19 10:18:46.979771] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:34.637 10:18:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:34.637 00:07:34.637 real 0m4.320s 00:07:34.637 user 0m6.076s 00:07:34.637 sys 0m0.690s 00:07:34.637 ************************************ 00:07:34.637 END TEST raid_superblock_test 00:07:34.637 
************************************ 00:07:34.637 10:18:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.637 10:18:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.637 10:18:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:34.637 10:18:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.637 10:18:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.637 10:18:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.637 ************************************ 00:07:34.637 START TEST raid_read_error_test 00:07:34.637 ************************************ 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= 
num_base_bdevs )) 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.ssfYgC7hNY 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62267 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62267 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62267 ']' 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.637 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.637 10:18:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.637 [2024-11-19 10:18:48.192454] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:34.637 [2024-11-19 10:18:48.192583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62267 ] 00:07:34.637 [2024-11-19 10:18:48.362746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.897 [2024-11-19 10:18:48.464991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.897 [2024-11-19 10:18:48.653638] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.897 [2024-11-19 10:18:48.653776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.466 10:18:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.466 BaseBdev1_malloc 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.466 true 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.466 [2024-11-19 10:18:49.075639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:35.466 [2024-11-19 10:18:49.075700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.466 [2024-11-19 10:18:49.075734] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:35.466 [2024-11-19 10:18:49.075744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.466 [2024-11-19 10:18:49.077733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.466 [2024-11-19 10:18:49.077774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:35.466 BaseBdev1 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 
00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:35.466 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.467 BaseBdev2_malloc 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.467 true 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.467 [2024-11-19 10:18:49.141850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:35.467 [2024-11-19 10:18:49.141899] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:35.467 [2024-11-19 10:18:49.141914] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:35.467 [2024-11-19 10:18:49.141923] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:35.467 [2024-11-19 10:18:49.143921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:35.467 [2024-11-19 10:18:49.143961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:07:35.467 BaseBdev2 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.467 [2024-11-19 10:18:49.153882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:35.467 [2024-11-19 10:18:49.155702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:35.467 [2024-11-19 10:18:49.155886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:35.467 [2024-11-19 10:18:49.155900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.467 [2024-11-19 10:18:49.156145] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:35.467 [2024-11-19 10:18:49.156302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:35.467 [2024-11-19 10:18:49.156319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:35.467 [2024-11-19 10:18:49.156469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.467 "name": "raid_bdev1", 00:07:35.467 "uuid": "1f1228f2-7f38-44a5-a78d-1611b67d7124", 00:07:35.467 "strip_size_kb": 64, 00:07:35.467 "state": "online", 00:07:35.467 "raid_level": "concat", 00:07:35.467 "superblock": true, 00:07:35.467 "num_base_bdevs": 2, 00:07:35.467 "num_base_bdevs_discovered": 2, 00:07:35.467 "num_base_bdevs_operational": 2, 00:07:35.467 "base_bdevs_list": [ 00:07:35.467 { 00:07:35.467 "name": "BaseBdev1", 00:07:35.467 "uuid": "73130d7b-8ce9-5c81-bde5-42cd59411899", 00:07:35.467 "is_configured": true, 00:07:35.467 "data_offset": 2048, 00:07:35.467 
"data_size": 63488 00:07:35.467 }, 00:07:35.467 { 00:07:35.467 "name": "BaseBdev2", 00:07:35.467 "uuid": "45bd1d43-5df9-5a7d-8ad7-88e24f2a5dc9", 00:07:35.467 "is_configured": true, 00:07:35.467 "data_offset": 2048, 00:07:35.467 "data_size": 63488 00:07:35.467 } 00:07:35.467 ] 00:07:35.467 }' 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.467 10:18:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.037 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:36.037 10:18:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:36.037 [2024-11-19 10:18:49.674285] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.976 "name": "raid_bdev1", 00:07:36.976 "uuid": "1f1228f2-7f38-44a5-a78d-1611b67d7124", 00:07:36.976 "strip_size_kb": 64, 00:07:36.976 "state": "online", 00:07:36.976 "raid_level": "concat", 00:07:36.976 "superblock": true, 00:07:36.976 "num_base_bdevs": 2, 00:07:36.976 "num_base_bdevs_discovered": 2, 00:07:36.976 "num_base_bdevs_operational": 2, 00:07:36.976 "base_bdevs_list": [ 00:07:36.976 { 00:07:36.976 "name": "BaseBdev1", 00:07:36.976 "uuid": "73130d7b-8ce9-5c81-bde5-42cd59411899", 00:07:36.976 "is_configured": true, 00:07:36.976 "data_offset": 2048, 00:07:36.976 
"data_size": 63488 00:07:36.976 }, 00:07:36.976 { 00:07:36.976 "name": "BaseBdev2", 00:07:36.976 "uuid": "45bd1d43-5df9-5a7d-8ad7-88e24f2a5dc9", 00:07:36.976 "is_configured": true, 00:07:36.976 "data_offset": 2048, 00:07:36.976 "data_size": 63488 00:07:36.976 } 00:07:36.976 ] 00:07:36.976 }' 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.976 10:18:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.545 [2024-11-19 10:18:51.031533] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:37.545 [2024-11-19 10:18:51.031566] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:37.545 [2024-11-19 10:18:51.034123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.545 [2024-11-19 10:18:51.034174] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.545 [2024-11-19 10:18:51.034206] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.545 [2024-11-19 10:18:51.034220] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.545 { 00:07:37.545 "results": [ 00:07:37.545 { 00:07:37.545 "job": "raid_bdev1", 00:07:37.545 "core_mask": "0x1", 00:07:37.545 "workload": "randrw", 00:07:37.545 "percentage": 50, 00:07:37.545 "status": "finished", 00:07:37.545 "queue_depth": 1, 00:07:37.545 "io_size": 131072, 00:07:37.545 "runtime": 1.358128, 00:07:37.545 "iops": 17289.97561349151, 00:07:37.545 "mibps": 2161.246951686439, 
00:07:37.545 "io_failed": 1, 00:07:37.545 "io_timeout": 0, 00:07:37.545 "avg_latency_us": 80.13812359289177, 00:07:37.545 "min_latency_us": 24.705676855895195, 00:07:37.545 "max_latency_us": 1380.8349344978167 00:07:37.545 } 00:07:37.545 ], 00:07:37.545 "core_count": 1 00:07:37.545 } 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62267 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62267 ']' 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62267 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62267 00:07:37.545 killing process with pid 62267 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62267' 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62267 00:07:37.545 [2024-11-19 10:18:51.080559] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.545 10:18:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62267 00:07:37.545 [2024-11-19 10:18:51.206990] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.ssfYgC7hNY 00:07:38.967 10:18:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:38.967 00:07:38.967 real 0m4.216s 00:07:38.967 user 0m5.065s 00:07:38.967 sys 0m0.507s 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.967 ************************************ 00:07:38.967 END TEST raid_read_error_test 00:07:38.967 ************************************ 00:07:38.967 10:18:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.967 10:18:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:38.967 10:18:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.967 10:18:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.967 10:18:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.967 ************************************ 00:07:38.967 START TEST raid_write_error_test 00:07:38.967 ************************************ 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.967 10:18:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.967 10:18:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u7m84juTYx 00:07:38.967 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62407 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62407 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62407 ']' 00:07:38.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.968 10:18:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.968 [2024-11-19 10:18:52.477547] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:38.968 [2024-11-19 10:18:52.477676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62407 ] 00:07:38.968 [2024-11-19 10:18:52.649286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.228 [2024-11-19 10:18:52.753515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.228 [2024-11-19 10:18:52.937734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.228 [2024-11-19 10:18:52.937787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 BaseBdev1_malloc 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 true 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 [2024-11-19 10:18:53.350267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.800 [2024-11-19 10:18:53.350322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.800 [2024-11-19 10:18:53.350369] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.800 [2024-11-19 10:18:53.350379] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.800 [2024-11-19 10:18:53.352430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.800 [2024-11-19 10:18:53.352504] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.800 BaseBdev1 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 BaseBdev2_malloc 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.800 10:18:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 true 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 [2024-11-19 10:18:53.415270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.800 [2024-11-19 10:18:53.415320] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.800 [2024-11-19 10:18:53.415352] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.800 [2024-11-19 10:18:53.415361] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.800 [2024-11-19 10:18:53.417312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.800 [2024-11-19 10:18:53.417401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.800 BaseBdev2 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 [2024-11-19 10:18:53.427305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:39.800 [2024-11-19 10:18:53.429039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.800 [2024-11-19 10:18:53.429214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.800 [2024-11-19 10:18:53.429229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.800 [2024-11-19 10:18:53.429442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:39.800 [2024-11-19 10:18:53.429604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.800 [2024-11-19 10:18:53.429616] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.800 [2024-11-19 10:18:53.429766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.800 10:18:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.800 "name": "raid_bdev1", 00:07:39.800 "uuid": "4b066071-a29c-4e98-b2ab-92a728410c7c", 00:07:39.800 "strip_size_kb": 64, 00:07:39.800 "state": "online", 00:07:39.800 "raid_level": "concat", 00:07:39.800 "superblock": true, 00:07:39.800 "num_base_bdevs": 2, 00:07:39.800 "num_base_bdevs_discovered": 2, 00:07:39.800 "num_base_bdevs_operational": 2, 00:07:39.800 "base_bdevs_list": [ 00:07:39.800 { 00:07:39.800 "name": "BaseBdev1", 00:07:39.800 "uuid": "81ac79e7-1ca9-5876-9003-6822a4649b56", 00:07:39.800 "is_configured": true, 00:07:39.800 "data_offset": 2048, 00:07:39.800 "data_size": 63488 00:07:39.800 }, 00:07:39.800 { 00:07:39.800 "name": "BaseBdev2", 00:07:39.800 "uuid": "6cd1d335-c2e3-5f74-b12a-651b728b1f09", 00:07:39.800 "is_configured": true, 00:07:39.800 "data_offset": 2048, 00:07:39.800 "data_size": 63488 00:07:39.800 } 00:07:39.800 ] 00:07:39.800 }' 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.800 10:18:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.370 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.370 10:18:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.370 [2024-11-19 10:18:53.923723] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.311 "name": "raid_bdev1", 00:07:41.311 "uuid": "4b066071-a29c-4e98-b2ab-92a728410c7c", 00:07:41.311 "strip_size_kb": 64, 00:07:41.311 "state": "online", 00:07:41.311 "raid_level": "concat", 00:07:41.311 "superblock": true, 00:07:41.311 "num_base_bdevs": 2, 00:07:41.311 "num_base_bdevs_discovered": 2, 00:07:41.311 "num_base_bdevs_operational": 2, 00:07:41.311 "base_bdevs_list": [ 00:07:41.311 { 00:07:41.311 "name": "BaseBdev1", 00:07:41.311 "uuid": "81ac79e7-1ca9-5876-9003-6822a4649b56", 00:07:41.311 "is_configured": true, 00:07:41.311 "data_offset": 2048, 00:07:41.311 "data_size": 63488 00:07:41.311 }, 00:07:41.311 { 00:07:41.311 "name": "BaseBdev2", 00:07:41.311 "uuid": "6cd1d335-c2e3-5f74-b12a-651b728b1f09", 00:07:41.311 "is_configured": true, 00:07:41.311 "data_offset": 2048, 00:07:41.311 "data_size": 63488 00:07:41.311 } 00:07:41.311 ] 00:07:41.311 }' 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.311 10:18:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.571 [2024-11-19 10:18:55.310105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.571 [2024-11-19 10:18:55.310139] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.571 [2024-11-19 10:18:55.312669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.571 [2024-11-19 10:18:55.312760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.571 [2024-11-19 10:18:55.312798] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.571 [2024-11-19 10:18:55.312812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:41.571 { 00:07:41.571 "results": [ 00:07:41.571 { 00:07:41.571 "job": "raid_bdev1", 00:07:41.571 "core_mask": "0x1", 00:07:41.571 "workload": "randrw", 00:07:41.571 "percentage": 50, 00:07:41.571 "status": "finished", 00:07:41.571 "queue_depth": 1, 00:07:41.571 "io_size": 131072, 00:07:41.571 "runtime": 1.38735, 00:07:41.571 "iops": 17247.990773777346, 00:07:41.571 "mibps": 2155.9988467221683, 00:07:41.571 "io_failed": 1, 00:07:41.571 "io_timeout": 0, 00:07:41.571 "avg_latency_us": 80.29906265910215, 00:07:41.571 "min_latency_us": 24.146724890829695, 00:07:41.571 "max_latency_us": 1416.6078602620087 00:07:41.571 } 00:07:41.571 ], 00:07:41.571 "core_count": 1 00:07:41.571 } 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62407 00:07:41.571 10:18:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62407 ']' 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62407 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62407 00:07:41.571 killing process with pid 62407 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62407' 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62407 00:07:41.571 [2024-11-19 10:18:55.344562] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.571 10:18:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62407 00:07:41.831 [2024-11-19 10:18:55.471122] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u7m84juTYx 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:43.214 ************************************ 00:07:43.214 END TEST raid_write_error_test 00:07:43.214 
************************************ 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:43.214 00:07:43.214 real 0m4.191s 00:07:43.214 user 0m5.017s 00:07:43.214 sys 0m0.495s 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.214 10:18:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.214 10:18:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:43.214 10:18:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:43.214 10:18:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.214 10:18:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.214 10:18:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.214 ************************************ 00:07:43.214 START TEST raid_state_function_test 00:07:43.214 ************************************ 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62545 00:07:43.214 Process raid pid: 62545 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62545' 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62545 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62545 ']' 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.214 10:18:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.214 [2024-11-19 10:18:56.727054] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:43.214 [2024-11-19 10:18:56.727269] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.214 [2024-11-19 10:18:56.901655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.474 [2024-11-19 10:18:57.008577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.474 [2024-11-19 10:18:57.199589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.474 [2024-11-19 10:18:57.199671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.045 [2024-11-19 10:18:57.555372] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.045 [2024-11-19 10:18:57.555489] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.045 [2024-11-19 10:18:57.555504] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.045 [2024-11-19 10:18:57.555514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.045 10:18:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.045 "name": "Existed_Raid", 00:07:44.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.045 "strip_size_kb": 0, 00:07:44.045 "state": "configuring", 00:07:44.045 
"raid_level": "raid1", 00:07:44.045 "superblock": false, 00:07:44.045 "num_base_bdevs": 2, 00:07:44.045 "num_base_bdevs_discovered": 0, 00:07:44.045 "num_base_bdevs_operational": 2, 00:07:44.045 "base_bdevs_list": [ 00:07:44.045 { 00:07:44.045 "name": "BaseBdev1", 00:07:44.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.045 "is_configured": false, 00:07:44.045 "data_offset": 0, 00:07:44.045 "data_size": 0 00:07:44.045 }, 00:07:44.045 { 00:07:44.045 "name": "BaseBdev2", 00:07:44.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.045 "is_configured": false, 00:07:44.045 "data_offset": 0, 00:07:44.045 "data_size": 0 00:07:44.045 } 00:07:44.045 ] 00:07:44.045 }' 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.045 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.305 10:18:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.305 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.305 10:18:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.305 [2024-11-19 10:18:58.002547] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.305 [2024-11-19 10:18:58.002622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:44.305 [2024-11-19 10:18:58.010534] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.305 [2024-11-19 10:18:58.010611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.305 [2024-11-19 10:18:58.010638] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.305 [2024-11-19 10:18:58.010662] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.305 [2024-11-19 10:18:58.054513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.305 BaseBdev1 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:44.305 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.306 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.306 [ 00:07:44.306 { 00:07:44.306 "name": "BaseBdev1", 00:07:44.306 "aliases": [ 00:07:44.306 "ab65c4a6-53bf-4689-a4b1-5269b2143f65" 00:07:44.306 ], 00:07:44.306 "product_name": "Malloc disk", 00:07:44.306 "block_size": 512, 00:07:44.306 "num_blocks": 65536, 00:07:44.306 "uuid": "ab65c4a6-53bf-4689-a4b1-5269b2143f65", 00:07:44.306 "assigned_rate_limits": { 00:07:44.306 "rw_ios_per_sec": 0, 00:07:44.306 "rw_mbytes_per_sec": 0, 00:07:44.306 "r_mbytes_per_sec": 0, 00:07:44.306 "w_mbytes_per_sec": 0 00:07:44.306 }, 00:07:44.306 "claimed": true, 00:07:44.306 "claim_type": "exclusive_write", 00:07:44.306 "zoned": false, 00:07:44.306 "supported_io_types": { 00:07:44.306 "read": true, 00:07:44.306 "write": true, 00:07:44.306 "unmap": true, 00:07:44.306 "flush": true, 00:07:44.306 "reset": true, 00:07:44.306 "nvme_admin": false, 00:07:44.306 "nvme_io": false, 00:07:44.566 "nvme_io_md": false, 00:07:44.566 "write_zeroes": true, 00:07:44.566 "zcopy": true, 00:07:44.566 "get_zone_info": false, 00:07:44.566 "zone_management": false, 00:07:44.566 "zone_append": false, 00:07:44.566 "compare": false, 00:07:44.566 "compare_and_write": false, 00:07:44.566 "abort": true, 00:07:44.566 "seek_hole": false, 00:07:44.566 "seek_data": false, 00:07:44.566 "copy": true, 00:07:44.566 "nvme_iov_md": 
false 00:07:44.566 }, 00:07:44.566 "memory_domains": [ 00:07:44.566 { 00:07:44.566 "dma_device_id": "system", 00:07:44.566 "dma_device_type": 1 00:07:44.566 }, 00:07:44.566 { 00:07:44.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.566 "dma_device_type": 2 00:07:44.566 } 00:07:44.566 ], 00:07:44.566 "driver_specific": {} 00:07:44.566 } 00:07:44.566 ] 00:07:44.566 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.566 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:44.566 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.566 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.567 
10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.567 "name": "Existed_Raid", 00:07:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.567 "strip_size_kb": 0, 00:07:44.567 "state": "configuring", 00:07:44.567 "raid_level": "raid1", 00:07:44.567 "superblock": false, 00:07:44.567 "num_base_bdevs": 2, 00:07:44.567 "num_base_bdevs_discovered": 1, 00:07:44.567 "num_base_bdevs_operational": 2, 00:07:44.567 "base_bdevs_list": [ 00:07:44.567 { 00:07:44.567 "name": "BaseBdev1", 00:07:44.567 "uuid": "ab65c4a6-53bf-4689-a4b1-5269b2143f65", 00:07:44.567 "is_configured": true, 00:07:44.567 "data_offset": 0, 00:07:44.567 "data_size": 65536 00:07:44.567 }, 00:07:44.567 { 00:07:44.567 "name": "BaseBdev2", 00:07:44.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.567 "is_configured": false, 00:07:44.567 "data_offset": 0, 00:07:44.567 "data_size": 0 00:07:44.567 } 00:07:44.567 ] 00:07:44.567 }' 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.567 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.827 [2024-11-19 10:18:58.521712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:44.827 [2024-11-19 10:18:58.521752] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.827 [2024-11-19 10:18:58.529741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:44.827 [2024-11-19 10:18:58.531465] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.827 [2024-11-19 10:18:58.531553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:44.827 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.828 "name": "Existed_Raid", 00:07:44.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.828 "strip_size_kb": 0, 00:07:44.828 "state": "configuring", 00:07:44.828 "raid_level": "raid1", 00:07:44.828 "superblock": false, 00:07:44.828 "num_base_bdevs": 2, 00:07:44.828 "num_base_bdevs_discovered": 1, 00:07:44.828 "num_base_bdevs_operational": 2, 00:07:44.828 "base_bdevs_list": [ 00:07:44.828 { 00:07:44.828 "name": "BaseBdev1", 00:07:44.828 "uuid": "ab65c4a6-53bf-4689-a4b1-5269b2143f65", 00:07:44.828 "is_configured": true, 00:07:44.828 "data_offset": 0, 00:07:44.828 "data_size": 65536 00:07:44.828 }, 00:07:44.828 { 00:07:44.828 "name": "BaseBdev2", 00:07:44.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.828 "is_configured": false, 00:07:44.828 "data_offset": 0, 00:07:44.828 "data_size": 0 00:07:44.828 } 00:07:44.828 ] 
00:07:44.828 }' 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.828 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 [2024-11-19 10:18:58.986382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:45.400 [2024-11-19 10:18:58.986426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:45.400 [2024-11-19 10:18:58.986434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:45.400 [2024-11-19 10:18:58.986686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.400 [2024-11-19 10:18:58.986842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:45.400 [2024-11-19 10:18:58.986856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:45.400 [2024-11-19 10:18:58.987144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.400 BaseBdev2 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.400 10:18:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.400 [ 00:07:45.400 { 00:07:45.400 "name": "BaseBdev2", 00:07:45.400 "aliases": [ 00:07:45.400 "9c076ef7-66cf-47db-b908-ad279e96a376" 00:07:45.400 ], 00:07:45.400 "product_name": "Malloc disk", 00:07:45.400 "block_size": 512, 00:07:45.400 "num_blocks": 65536, 00:07:45.400 "uuid": "9c076ef7-66cf-47db-b908-ad279e96a376", 00:07:45.400 "assigned_rate_limits": { 00:07:45.400 "rw_ios_per_sec": 0, 00:07:45.400 "rw_mbytes_per_sec": 0, 00:07:45.400 "r_mbytes_per_sec": 0, 00:07:45.400 "w_mbytes_per_sec": 0 00:07:45.400 }, 00:07:45.400 "claimed": true, 00:07:45.400 "claim_type": "exclusive_write", 00:07:45.400 "zoned": false, 00:07:45.400 "supported_io_types": { 00:07:45.400 "read": true, 00:07:45.400 "write": true, 00:07:45.400 "unmap": true, 00:07:45.400 "flush": true, 00:07:45.400 "reset": true, 00:07:45.400 "nvme_admin": false, 00:07:45.400 "nvme_io": false, 00:07:45.400 "nvme_io_md": false, 00:07:45.400 "write_zeroes": 
true, 00:07:45.400 "zcopy": true, 00:07:45.400 "get_zone_info": false, 00:07:45.400 "zone_management": false, 00:07:45.400 "zone_append": false, 00:07:45.400 "compare": false, 00:07:45.400 "compare_and_write": false, 00:07:45.400 "abort": true, 00:07:45.400 "seek_hole": false, 00:07:45.400 "seek_data": false, 00:07:45.400 "copy": true, 00:07:45.400 "nvme_iov_md": false 00:07:45.400 }, 00:07:45.400 "memory_domains": [ 00:07:45.400 { 00:07:45.400 "dma_device_id": "system", 00:07:45.400 "dma_device_type": 1 00:07:45.400 }, 00:07:45.400 { 00:07:45.400 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.400 "dma_device_type": 2 00:07:45.400 } 00:07:45.400 ], 00:07:45.400 "driver_specific": {} 00:07:45.400 } 00:07:45.400 ] 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.400 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.401 10:18:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.401 "name": "Existed_Raid", 00:07:45.401 "uuid": "c9a687ea-a85d-48c6-865d-98c3c5f11076", 00:07:45.401 "strip_size_kb": 0, 00:07:45.401 "state": "online", 00:07:45.401 "raid_level": "raid1", 00:07:45.401 "superblock": false, 00:07:45.401 "num_base_bdevs": 2, 00:07:45.401 "num_base_bdevs_discovered": 2, 00:07:45.401 "num_base_bdevs_operational": 2, 00:07:45.401 "base_bdevs_list": [ 00:07:45.401 { 00:07:45.401 "name": "BaseBdev1", 00:07:45.401 "uuid": "ab65c4a6-53bf-4689-a4b1-5269b2143f65", 00:07:45.401 "is_configured": true, 00:07:45.401 "data_offset": 0, 00:07:45.401 "data_size": 65536 00:07:45.401 }, 00:07:45.401 { 00:07:45.401 "name": "BaseBdev2", 00:07:45.401 "uuid": "9c076ef7-66cf-47db-b908-ad279e96a376", 00:07:45.401 "is_configured": true, 00:07:45.401 "data_offset": 0, 00:07:45.401 "data_size": 65536 00:07:45.401 } 00:07:45.401 ] 00:07:45.401 }' 00:07:45.401 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.401 10:18:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.978 [2024-11-19 10:18:59.489807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.978 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:45.978 "name": "Existed_Raid", 00:07:45.978 "aliases": [ 00:07:45.978 "c9a687ea-a85d-48c6-865d-98c3c5f11076" 00:07:45.978 ], 00:07:45.978 "product_name": "Raid Volume", 00:07:45.978 "block_size": 512, 00:07:45.978 "num_blocks": 65536, 00:07:45.978 "uuid": "c9a687ea-a85d-48c6-865d-98c3c5f11076", 00:07:45.978 "assigned_rate_limits": { 00:07:45.978 "rw_ios_per_sec": 0, 00:07:45.978 "rw_mbytes_per_sec": 0, 00:07:45.978 "r_mbytes_per_sec": 0, 00:07:45.978 
"w_mbytes_per_sec": 0 00:07:45.978 }, 00:07:45.978 "claimed": false, 00:07:45.978 "zoned": false, 00:07:45.978 "supported_io_types": { 00:07:45.978 "read": true, 00:07:45.978 "write": true, 00:07:45.978 "unmap": false, 00:07:45.979 "flush": false, 00:07:45.979 "reset": true, 00:07:45.979 "nvme_admin": false, 00:07:45.979 "nvme_io": false, 00:07:45.979 "nvme_io_md": false, 00:07:45.979 "write_zeroes": true, 00:07:45.979 "zcopy": false, 00:07:45.979 "get_zone_info": false, 00:07:45.979 "zone_management": false, 00:07:45.979 "zone_append": false, 00:07:45.979 "compare": false, 00:07:45.979 "compare_and_write": false, 00:07:45.979 "abort": false, 00:07:45.979 "seek_hole": false, 00:07:45.979 "seek_data": false, 00:07:45.979 "copy": false, 00:07:45.979 "nvme_iov_md": false 00:07:45.979 }, 00:07:45.979 "memory_domains": [ 00:07:45.979 { 00:07:45.979 "dma_device_id": "system", 00:07:45.979 "dma_device_type": 1 00:07:45.979 }, 00:07:45.979 { 00:07:45.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.979 "dma_device_type": 2 00:07:45.979 }, 00:07:45.979 { 00:07:45.979 "dma_device_id": "system", 00:07:45.979 "dma_device_type": 1 00:07:45.979 }, 00:07:45.979 { 00:07:45.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:45.979 "dma_device_type": 2 00:07:45.979 } 00:07:45.979 ], 00:07:45.979 "driver_specific": { 00:07:45.979 "raid": { 00:07:45.979 "uuid": "c9a687ea-a85d-48c6-865d-98c3c5f11076", 00:07:45.979 "strip_size_kb": 0, 00:07:45.979 "state": "online", 00:07:45.979 "raid_level": "raid1", 00:07:45.979 "superblock": false, 00:07:45.979 "num_base_bdevs": 2, 00:07:45.979 "num_base_bdevs_discovered": 2, 00:07:45.979 "num_base_bdevs_operational": 2, 00:07:45.979 "base_bdevs_list": [ 00:07:45.979 { 00:07:45.979 "name": "BaseBdev1", 00:07:45.979 "uuid": "ab65c4a6-53bf-4689-a4b1-5269b2143f65", 00:07:45.979 "is_configured": true, 00:07:45.979 "data_offset": 0, 00:07:45.979 "data_size": 65536 00:07:45.979 }, 00:07:45.979 { 00:07:45.979 "name": "BaseBdev2", 00:07:45.979 "uuid": 
"9c076ef7-66cf-47db-b908-ad279e96a376", 00:07:45.979 "is_configured": true, 00:07:45.979 "data_offset": 0, 00:07:45.979 "data_size": 65536 00:07:45.979 } 00:07:45.979 ] 00:07:45.979 } 00:07:45.979 } 00:07:45.979 }' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:45.979 BaseBdev2' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:45.979 10:18:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.979 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.979 [2024-11-19 10:18:59.709213] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.240 "name": "Existed_Raid", 00:07:46.240 "uuid": "c9a687ea-a85d-48c6-865d-98c3c5f11076", 00:07:46.240 "strip_size_kb": 0, 00:07:46.240 "state": "online", 00:07:46.240 "raid_level": "raid1", 00:07:46.240 "superblock": false, 00:07:46.240 "num_base_bdevs": 2, 00:07:46.240 "num_base_bdevs_discovered": 1, 00:07:46.240 "num_base_bdevs_operational": 1, 00:07:46.240 "base_bdevs_list": [ 00:07:46.240 { 
00:07:46.240 "name": null, 00:07:46.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.240 "is_configured": false, 00:07:46.240 "data_offset": 0, 00:07:46.240 "data_size": 65536 00:07:46.240 }, 00:07:46.240 { 00:07:46.240 "name": "BaseBdev2", 00:07:46.240 "uuid": "9c076ef7-66cf-47db-b908-ad279e96a376", 00:07:46.240 "is_configured": true, 00:07:46.240 "data_offset": 0, 00:07:46.240 "data_size": 65536 00:07:46.240 } 00:07:46.240 ] 00:07:46.240 }' 00:07:46.240 10:18:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.241 10:18:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.501 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:46.501 [2024-11-19 10:19:00.263535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:46.501 [2024-11-19 10:19:00.263628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.762 [2024-11-19 10:19:00.355461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.762 [2024-11-19 10:19:00.355572] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.762 [2024-11-19 10:19:00.355590] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62545 00:07:46.762 10:19:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62545 ']' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62545 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62545 00:07:46.762 killing process with pid 62545 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62545' 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62545 00:07:46.762 [2024-11-19 10:19:00.436338] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.762 10:19:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62545 00:07:46.762 [2024-11-19 10:19:00.452396] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:48.185 10:19:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:48.185 00:07:48.185 real 0m4.857s 00:07:48.185 user 0m7.052s 00:07:48.185 sys 0m0.761s 00:07:48.185 10:19:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.186 ************************************ 00:07:48.186 END TEST raid_state_function_test 00:07:48.186 ************************************ 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.186 10:19:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:07:48.186 10:19:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:48.186 10:19:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.186 10:19:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:48.186 ************************************ 00:07:48.186 START TEST raid_state_function_test_sb 00:07:48.186 ************************************ 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62798 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62798' 00:07:48.186 Process raid pid: 62798 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62798 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62798 ']' 00:07:48.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.186 10:19:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.186 [2024-11-19 10:19:01.651573] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:48.186 [2024-11-19 10:19:01.651762] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.186 [2024-11-19 10:19:01.809245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.186 [2024-11-19 10:19:01.914025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.447 [2024-11-19 10:19:02.108186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.447 [2024-11-19 10:19:02.108306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.707 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.708 [2024-11-19 10:19:02.477889] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.708 [2024-11-19 10:19:02.477942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.708 [2024-11-19 10:19:02.477953] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.708 [2024-11-19 10:19:02.477962] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.708 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.968 10:19:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.969 "name": "Existed_Raid", 00:07:48.969 "uuid": "dfda3000-95e5-4ec2-8fa5-bde47daa321c", 00:07:48.969 "strip_size_kb": 0, 00:07:48.969 "state": "configuring", 00:07:48.969 "raid_level": "raid1", 00:07:48.969 "superblock": true, 00:07:48.969 "num_base_bdevs": 2, 00:07:48.969 "num_base_bdevs_discovered": 0, 00:07:48.969 "num_base_bdevs_operational": 2, 00:07:48.969 "base_bdevs_list": [ 00:07:48.969 { 00:07:48.969 "name": "BaseBdev1", 00:07:48.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.969 "is_configured": false, 00:07:48.969 "data_offset": 0, 00:07:48.969 "data_size": 0 00:07:48.969 }, 00:07:48.969 { 00:07:48.969 "name": "BaseBdev2", 00:07:48.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.969 "is_configured": false, 00:07:48.969 "data_offset": 0, 00:07:48.969 "data_size": 0 00:07:48.969 } 00:07:48.969 ] 00:07:48.969 }' 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.969 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.230 [2024-11-19 10:19:02.893107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.230 [2024-11-19 10:19:02.893185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.230 [2024-11-19 10:19:02.905095] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:49.230 [2024-11-19 10:19:02.905170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:49.230 [2024-11-19 10:19:02.905197] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.230 [2024-11-19 10:19:02.905221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.230 [2024-11-19 10:19:02.950254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:07:49.230 BaseBdev1 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.230 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.231 [ 00:07:49.231 { 00:07:49.231 "name": "BaseBdev1", 00:07:49.231 "aliases": [ 00:07:49.231 "a7fb6da9-601a-43bc-bd86-61681373c575" 00:07:49.231 ], 00:07:49.231 "product_name": "Malloc disk", 00:07:49.231 "block_size": 512, 00:07:49.231 "num_blocks": 65536, 00:07:49.231 "uuid": "a7fb6da9-601a-43bc-bd86-61681373c575", 00:07:49.231 
"assigned_rate_limits": { 00:07:49.231 "rw_ios_per_sec": 0, 00:07:49.231 "rw_mbytes_per_sec": 0, 00:07:49.231 "r_mbytes_per_sec": 0, 00:07:49.231 "w_mbytes_per_sec": 0 00:07:49.231 }, 00:07:49.231 "claimed": true, 00:07:49.231 "claim_type": "exclusive_write", 00:07:49.231 "zoned": false, 00:07:49.231 "supported_io_types": { 00:07:49.231 "read": true, 00:07:49.231 "write": true, 00:07:49.231 "unmap": true, 00:07:49.231 "flush": true, 00:07:49.231 "reset": true, 00:07:49.231 "nvme_admin": false, 00:07:49.231 "nvme_io": false, 00:07:49.231 "nvme_io_md": false, 00:07:49.231 "write_zeroes": true, 00:07:49.231 "zcopy": true, 00:07:49.231 "get_zone_info": false, 00:07:49.231 "zone_management": false, 00:07:49.231 "zone_append": false, 00:07:49.231 "compare": false, 00:07:49.231 "compare_and_write": false, 00:07:49.231 "abort": true, 00:07:49.231 "seek_hole": false, 00:07:49.231 "seek_data": false, 00:07:49.231 "copy": true, 00:07:49.231 "nvme_iov_md": false 00:07:49.231 }, 00:07:49.231 "memory_domains": [ 00:07:49.231 { 00:07:49.231 "dma_device_id": "system", 00:07:49.231 "dma_device_type": 1 00:07:49.231 }, 00:07:49.231 { 00:07:49.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.231 "dma_device_type": 2 00:07:49.231 } 00:07:49.231 ], 00:07:49.231 "driver_specific": {} 00:07:49.231 } 00:07:49.231 ] 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.231 10:19:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.491 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.491 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.491 "name": "Existed_Raid", 00:07:49.491 "uuid": "4eb24e5a-7fea-4750-a12f-be8fd68cbae7", 00:07:49.491 "strip_size_kb": 0, 00:07:49.491 "state": "configuring", 00:07:49.491 "raid_level": "raid1", 00:07:49.491 "superblock": true, 00:07:49.491 "num_base_bdevs": 2, 00:07:49.491 "num_base_bdevs_discovered": 1, 00:07:49.491 "num_base_bdevs_operational": 2, 00:07:49.491 "base_bdevs_list": [ 00:07:49.491 { 00:07:49.491 "name": "BaseBdev1", 00:07:49.491 "uuid": "a7fb6da9-601a-43bc-bd86-61681373c575", 00:07:49.491 "is_configured": true, 00:07:49.491 "data_offset": 2048, 
00:07:49.491 "data_size": 63488 00:07:49.491 }, 00:07:49.491 { 00:07:49.491 "name": "BaseBdev2", 00:07:49.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.491 "is_configured": false, 00:07:49.491 "data_offset": 0, 00:07:49.492 "data_size": 0 00:07:49.492 } 00:07:49.492 ] 00:07:49.492 }' 00:07:49.492 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.492 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 [2024-11-19 10:19:03.433449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.752 [2024-11-19 10:19:03.433491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.752 [2024-11-19 10:19:03.445472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.752 [2024-11-19 10:19:03.447242] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.752 [2024-11-19 10:19:03.447284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:07:49.752 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.753 "name": "Existed_Raid", 00:07:49.753 "uuid": "9a613694-f9eb-4944-890d-d76a8a6d1c7f", 00:07:49.753 "strip_size_kb": 0, 00:07:49.753 "state": "configuring", 00:07:49.753 "raid_level": "raid1", 00:07:49.753 "superblock": true, 00:07:49.753 "num_base_bdevs": 2, 00:07:49.753 "num_base_bdevs_discovered": 1, 00:07:49.753 "num_base_bdevs_operational": 2, 00:07:49.753 "base_bdevs_list": [ 00:07:49.753 { 00:07:49.753 "name": "BaseBdev1", 00:07:49.753 "uuid": "a7fb6da9-601a-43bc-bd86-61681373c575", 00:07:49.753 "is_configured": true, 00:07:49.753 "data_offset": 2048, 00:07:49.753 "data_size": 63488 00:07:49.753 }, 00:07:49.753 { 00:07:49.753 "name": "BaseBdev2", 00:07:49.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.753 "is_configured": false, 00:07:49.753 "data_offset": 0, 00:07:49.753 "data_size": 0 00:07:49.753 } 00:07:49.753 ] 00:07:49.753 }' 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.753 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 [2024-11-19 10:19:03.896246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:50.325 [2024-11-19 10:19:03.896566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:50.325 [2024-11-19 10:19:03.896618] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:50.325 [2024-11-19 10:19:03.896895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:50.325 [2024-11-19 10:19:03.897117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:50.325 [2024-11-19 10:19:03.897167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:50.325 BaseBdev2 00:07:50.325 [2024-11-19 10:19:03.897371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.325 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.325 [ 00:07:50.325 { 00:07:50.325 "name": "BaseBdev2", 00:07:50.325 "aliases": [ 00:07:50.325 "a9fb4122-bd73-459d-afc7-afdc40641a3d" 00:07:50.325 ], 00:07:50.325 "product_name": "Malloc disk", 00:07:50.325 "block_size": 512, 00:07:50.325 "num_blocks": 65536, 00:07:50.325 "uuid": "a9fb4122-bd73-459d-afc7-afdc40641a3d", 00:07:50.325 "assigned_rate_limits": { 00:07:50.325 "rw_ios_per_sec": 0, 00:07:50.325 "rw_mbytes_per_sec": 0, 00:07:50.325 "r_mbytes_per_sec": 0, 00:07:50.325 "w_mbytes_per_sec": 0 00:07:50.325 }, 00:07:50.325 "claimed": true, 00:07:50.325 "claim_type": "exclusive_write", 00:07:50.325 "zoned": false, 00:07:50.325 "supported_io_types": { 00:07:50.325 "read": true, 00:07:50.325 "write": true, 00:07:50.325 "unmap": true, 00:07:50.325 "flush": true, 00:07:50.325 "reset": true, 00:07:50.325 "nvme_admin": false, 00:07:50.325 "nvme_io": false, 00:07:50.325 "nvme_io_md": false, 00:07:50.325 "write_zeroes": true, 00:07:50.325 "zcopy": true, 00:07:50.325 "get_zone_info": false, 00:07:50.325 "zone_management": false, 00:07:50.325 "zone_append": false, 00:07:50.325 "compare": false, 00:07:50.325 "compare_and_write": false, 00:07:50.325 "abort": true, 00:07:50.325 "seek_hole": false, 00:07:50.325 "seek_data": false, 00:07:50.325 "copy": true, 00:07:50.325 "nvme_iov_md": false 00:07:50.325 }, 00:07:50.325 "memory_domains": [ 00:07:50.325 { 00:07:50.325 "dma_device_id": "system", 00:07:50.325 "dma_device_type": 1 00:07:50.325 }, 00:07:50.325 { 00:07:50.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.325 "dma_device_type": 2 00:07:50.325 } 00:07:50.326 ], 00:07:50.326 "driver_specific": {} 00:07:50.326 } 00:07:50.326 ] 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.326 "name": "Existed_Raid", 00:07:50.326 "uuid": "9a613694-f9eb-4944-890d-d76a8a6d1c7f", 00:07:50.326 "strip_size_kb": 0, 00:07:50.326 "state": "online", 00:07:50.326 "raid_level": "raid1", 00:07:50.326 "superblock": true, 00:07:50.326 "num_base_bdevs": 2, 00:07:50.326 "num_base_bdevs_discovered": 2, 00:07:50.326 "num_base_bdevs_operational": 2, 00:07:50.326 "base_bdevs_list": [ 00:07:50.326 { 00:07:50.326 "name": "BaseBdev1", 00:07:50.326 "uuid": "a7fb6da9-601a-43bc-bd86-61681373c575", 00:07:50.326 "is_configured": true, 00:07:50.326 "data_offset": 2048, 00:07:50.326 "data_size": 63488 00:07:50.326 }, 00:07:50.326 { 00:07:50.326 "name": "BaseBdev2", 00:07:50.326 "uuid": "a9fb4122-bd73-459d-afc7-afdc40641a3d", 00:07:50.326 "is_configured": true, 00:07:50.326 "data_offset": 2048, 00:07:50.326 "data_size": 63488 00:07:50.326 } 00:07:50.326 ] 00:07:50.326 }' 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.326 10:19:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.896 [2024-11-19 10:19:04.391734] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.896 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.896 "name": "Existed_Raid", 00:07:50.896 "aliases": [ 00:07:50.896 "9a613694-f9eb-4944-890d-d76a8a6d1c7f" 00:07:50.896 ], 00:07:50.896 "product_name": "Raid Volume", 00:07:50.896 "block_size": 512, 00:07:50.896 "num_blocks": 63488, 00:07:50.896 "uuid": "9a613694-f9eb-4944-890d-d76a8a6d1c7f", 00:07:50.896 "assigned_rate_limits": { 00:07:50.896 "rw_ios_per_sec": 0, 00:07:50.896 "rw_mbytes_per_sec": 0, 00:07:50.896 "r_mbytes_per_sec": 0, 00:07:50.896 "w_mbytes_per_sec": 0 00:07:50.896 }, 00:07:50.896 "claimed": false, 00:07:50.896 "zoned": false, 00:07:50.896 "supported_io_types": { 00:07:50.896 "read": true, 00:07:50.896 "write": true, 00:07:50.896 "unmap": false, 00:07:50.896 "flush": false, 00:07:50.896 "reset": true, 00:07:50.896 "nvme_admin": false, 00:07:50.896 "nvme_io": false, 00:07:50.896 "nvme_io_md": false, 00:07:50.896 "write_zeroes": true, 00:07:50.896 "zcopy": false, 00:07:50.896 "get_zone_info": false, 00:07:50.896 "zone_management": false, 00:07:50.896 "zone_append": false, 00:07:50.896 "compare": false, 00:07:50.896 "compare_and_write": false, 00:07:50.896 "abort": false, 00:07:50.896 "seek_hole": false, 
00:07:50.896 "seek_data": false, 00:07:50.896 "copy": false, 00:07:50.896 "nvme_iov_md": false 00:07:50.896 }, 00:07:50.896 "memory_domains": [ 00:07:50.896 { 00:07:50.896 "dma_device_id": "system", 00:07:50.896 "dma_device_type": 1 00:07:50.896 }, 00:07:50.896 { 00:07:50.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.896 "dma_device_type": 2 00:07:50.896 }, 00:07:50.896 { 00:07:50.896 "dma_device_id": "system", 00:07:50.896 "dma_device_type": 1 00:07:50.896 }, 00:07:50.896 { 00:07:50.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.896 "dma_device_type": 2 00:07:50.896 } 00:07:50.896 ], 00:07:50.896 "driver_specific": { 00:07:50.896 "raid": { 00:07:50.896 "uuid": "9a613694-f9eb-4944-890d-d76a8a6d1c7f", 00:07:50.896 "strip_size_kb": 0, 00:07:50.896 "state": "online", 00:07:50.896 "raid_level": "raid1", 00:07:50.896 "superblock": true, 00:07:50.896 "num_base_bdevs": 2, 00:07:50.896 "num_base_bdevs_discovered": 2, 00:07:50.896 "num_base_bdevs_operational": 2, 00:07:50.896 "base_bdevs_list": [ 00:07:50.896 { 00:07:50.896 "name": "BaseBdev1", 00:07:50.896 "uuid": "a7fb6da9-601a-43bc-bd86-61681373c575", 00:07:50.896 "is_configured": true, 00:07:50.896 "data_offset": 2048, 00:07:50.896 "data_size": 63488 00:07:50.896 }, 00:07:50.896 { 00:07:50.896 "name": "BaseBdev2", 00:07:50.896 "uuid": "a9fb4122-bd73-459d-afc7-afdc40641a3d", 00:07:50.896 "is_configured": true, 00:07:50.896 "data_offset": 2048, 00:07:50.896 "data_size": 63488 00:07:50.896 } 00:07:50.896 ] 00:07:50.896 } 00:07:50.896 } 00:07:50.896 }' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.897 BaseBdev2' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.897 10:19:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.897 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:50.897 [2024-11-19 10:19:04.639086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.157 "name": "Existed_Raid", 00:07:51.157 "uuid": "9a613694-f9eb-4944-890d-d76a8a6d1c7f", 00:07:51.157 "strip_size_kb": 0, 00:07:51.157 "state": "online", 00:07:51.157 "raid_level": "raid1", 00:07:51.157 "superblock": true, 00:07:51.157 "num_base_bdevs": 2, 00:07:51.157 "num_base_bdevs_discovered": 1, 00:07:51.157 "num_base_bdevs_operational": 1, 00:07:51.157 "base_bdevs_list": [ 00:07:51.157 { 00:07:51.157 "name": null, 00:07:51.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.157 "is_configured": false, 00:07:51.157 "data_offset": 0, 00:07:51.157 "data_size": 63488 00:07:51.157 }, 00:07:51.157 { 00:07:51.157 "name": "BaseBdev2", 00:07:51.157 "uuid": "a9fb4122-bd73-459d-afc7-afdc40641a3d", 00:07:51.157 "is_configured": true, 00:07:51.157 "data_offset": 2048, 00:07:51.157 "data_size": 63488 00:07:51.157 } 00:07:51.157 ] 00:07:51.157 }' 00:07:51.157 10:19:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.157 10:19:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.417 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.417 [2024-11-19 10:19:05.167814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.417 [2024-11-19 10:19:05.167972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:51.677 [2024-11-19 10:19:05.257265] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:51.677 [2024-11-19 10:19:05.257388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: 
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:51.677 [2024-11-19 10:19:05.257431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.677 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.677 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.677 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.677 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62798 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62798 ']' 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62798 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62798 00:07:51.678 killing process with pid 62798 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62798' 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62798 00:07:51.678 [2024-11-19 10:19:05.340039] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.678 10:19:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62798 00:07:51.678 [2024-11-19 10:19:05.356071] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.617 ************************************ 00:07:52.617 END TEST raid_state_function_test_sb 00:07:52.617 ************************************ 00:07:52.617 10:19:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.617 00:07:52.617 real 0m4.836s 00:07:52.617 user 0m7.008s 00:07:52.617 sys 0m0.758s 00:07:52.617 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.617 10:19:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.877 10:19:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:52.877 10:19:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:52.877 10:19:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.877 10:19:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.877 ************************************ 00:07:52.877 START TEST 
raid_superblock_test 00:07:52.877 ************************************ 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63046 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63046 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63046 ']' 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.877 10:19:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.877 [2024-11-19 10:19:06.554336] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:07:52.877 [2024-11-19 10:19:06.554519] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63046 ] 00:07:53.136 [2024-11-19 10:19:06.712756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.136 [2024-11-19 10:19:06.816416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.395 [2024-11-19 10:19:07.006719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.395 [2024-11-19 10:19:07.006785] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:53.654 
10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 malloc1 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.654 [2024-11-19 10:19:07.425306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.654 [2024-11-19 10:19:07.425433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.654 [2024-11-19 10:19:07.425474] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:53.654 [2024-11-19 10:19:07.425502] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.654 [2024-11-19 10:19:07.427692] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.654 [2024-11-19 10:19:07.427768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.654 pt1 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.654 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.914 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:53.914 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.914 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.914 malloc2 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.915 [2024-11-19 10:19:07.484208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:53.915 [2024-11-19 10:19:07.484324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.915 [2024-11-19 10:19:07.484373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:53.915 [2024-11-19 10:19:07.484405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.915 [2024-11-19 10:19:07.486657] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.915 [2024-11-19 10:19:07.486738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:53.915 
pt2 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.915 [2024-11-19 10:19:07.496235] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:53.915 [2024-11-19 10:19:07.497947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:53.915 [2024-11-19 10:19:07.498112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:53.915 [2024-11-19 10:19:07.498130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:53.915 [2024-11-19 10:19:07.498367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:53.915 [2024-11-19 10:19:07.498523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:53.915 [2024-11-19 10:19:07.498538] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:53.915 [2024-11-19 10:19:07.498678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.915 "name": "raid_bdev1", 00:07:53.915 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:53.915 "strip_size_kb": 0, 00:07:53.915 "state": "online", 00:07:53.915 "raid_level": "raid1", 00:07:53.915 "superblock": true, 00:07:53.915 "num_base_bdevs": 2, 00:07:53.915 "num_base_bdevs_discovered": 2, 00:07:53.915 "num_base_bdevs_operational": 2, 00:07:53.915 "base_bdevs_list": [ 00:07:53.915 { 00:07:53.915 "name": "pt1", 00:07:53.915 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:53.915 "is_configured": true, 00:07:53.915 "data_offset": 2048, 00:07:53.915 "data_size": 63488 00:07:53.915 }, 00:07:53.915 { 00:07:53.915 "name": "pt2", 00:07:53.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:53.915 "is_configured": true, 00:07:53.915 "data_offset": 2048, 00:07:53.915 "data_size": 63488 00:07:53.915 } 00:07:53.915 ] 00:07:53.915 }' 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.915 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:54.485 [2024-11-19 10:19:07.963882] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.485 10:19:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:07:54.485 "name": "raid_bdev1", 00:07:54.485 "aliases": [ 00:07:54.485 "e9106b7a-deac-4567-b213-0f731d968eb0" 00:07:54.485 ], 00:07:54.485 "product_name": "Raid Volume", 00:07:54.485 "block_size": 512, 00:07:54.485 "num_blocks": 63488, 00:07:54.485 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:54.485 "assigned_rate_limits": { 00:07:54.485 "rw_ios_per_sec": 0, 00:07:54.485 "rw_mbytes_per_sec": 0, 00:07:54.485 "r_mbytes_per_sec": 0, 00:07:54.485 "w_mbytes_per_sec": 0 00:07:54.485 }, 00:07:54.485 "claimed": false, 00:07:54.485 "zoned": false, 00:07:54.485 "supported_io_types": { 00:07:54.485 "read": true, 00:07:54.485 "write": true, 00:07:54.485 "unmap": false, 00:07:54.485 "flush": false, 00:07:54.485 "reset": true, 00:07:54.485 "nvme_admin": false, 00:07:54.485 "nvme_io": false, 00:07:54.485 "nvme_io_md": false, 00:07:54.485 "write_zeroes": true, 00:07:54.485 "zcopy": false, 00:07:54.485 "get_zone_info": false, 00:07:54.485 "zone_management": false, 00:07:54.485 "zone_append": false, 00:07:54.485 "compare": false, 00:07:54.485 "compare_and_write": false, 00:07:54.485 "abort": false, 00:07:54.485 "seek_hole": false, 00:07:54.485 "seek_data": false, 00:07:54.485 "copy": false, 00:07:54.485 "nvme_iov_md": false 00:07:54.485 }, 00:07:54.485 "memory_domains": [ 00:07:54.485 { 00:07:54.485 "dma_device_id": "system", 00:07:54.485 "dma_device_type": 1 00:07:54.485 }, 00:07:54.485 { 00:07:54.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.485 "dma_device_type": 2 00:07:54.485 }, 00:07:54.485 { 00:07:54.485 "dma_device_id": "system", 00:07:54.485 "dma_device_type": 1 00:07:54.485 }, 00:07:54.485 { 00:07:54.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.485 "dma_device_type": 2 00:07:54.485 } 00:07:54.485 ], 00:07:54.485 "driver_specific": { 00:07:54.485 "raid": { 00:07:54.485 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:54.485 "strip_size_kb": 0, 00:07:54.485 "state": "online", 00:07:54.485 "raid_level": "raid1", 
00:07:54.485 "superblock": true, 00:07:54.485 "num_base_bdevs": 2, 00:07:54.485 "num_base_bdevs_discovered": 2, 00:07:54.485 "num_base_bdevs_operational": 2, 00:07:54.485 "base_bdevs_list": [ 00:07:54.485 { 00:07:54.485 "name": "pt1", 00:07:54.485 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.485 "is_configured": true, 00:07:54.485 "data_offset": 2048, 00:07:54.485 "data_size": 63488 00:07:54.485 }, 00:07:54.485 { 00:07:54.485 "name": "pt2", 00:07:54.485 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.485 "is_configured": true, 00:07:54.485 "data_offset": 2048, 00:07:54.485 "data_size": 63488 00:07:54.485 } 00:07:54.485 ] 00:07:54.485 } 00:07:54.485 } 00:07:54.485 }' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:54.485 pt2' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.485 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:54.486 [2024-11-19 10:19:08.199431] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e9106b7a-deac-4567-b213-0f731d968eb0 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e9106b7a-deac-4567-b213-0f731d968eb0 ']' 00:07:54.486 10:19:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.486 [2024-11-19 10:19:08.247092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.486 [2024-11-19 10:19:08.247119] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:54.486 [2024-11-19 10:19:08.247211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:54.486 [2024-11-19 10:19:08.247274] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:54.486 [2024-11-19 10:19:08.247290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.486 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.746 10:19:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 [2024-11-19 10:19:08.382856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:54.746 [2024-11-19 10:19:08.384728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:54.746 [2024-11-19 10:19:08.384853] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:54.746 [2024-11-19 10:19:08.384962] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:54.746 [2024-11-19 10:19:08.385035] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:54.746 [2024-11-19 10:19:08.385071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:54.746 request: 00:07:54.746 { 00:07:54.746 "name": "raid_bdev1", 00:07:54.746 "raid_level": "raid1", 00:07:54.746 "base_bdevs": [ 00:07:54.746 "malloc1", 00:07:54.746 "malloc2" 00:07:54.746 ], 00:07:54.746 "superblock": false, 00:07:54.746 "method": "bdev_raid_create", 00:07:54.746 "req_id": 1 00:07:54.746 } 00:07:54.746 Got 
JSON-RPC error response 00:07:54.746 response: 00:07:54.746 { 00:07:54.746 "code": -17, 00:07:54.746 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:54.746 } 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:54.746 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.747 [2024-11-19 10:19:08.446727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:54.747 [2024-11-19 10:19:08.446783] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:07:54.747 [2024-11-19 10:19:08.446801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:54.747 [2024-11-19 10:19:08.446814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.747 [2024-11-19 10:19:08.448972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.747 [2024-11-19 10:19:08.449027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:54.747 [2024-11-19 10:19:08.449104] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:54.747 [2024-11-19 10:19:08.449171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.747 pt1 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.747 
10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.747 "name": "raid_bdev1", 00:07:54.747 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:54.747 "strip_size_kb": 0, 00:07:54.747 "state": "configuring", 00:07:54.747 "raid_level": "raid1", 00:07:54.747 "superblock": true, 00:07:54.747 "num_base_bdevs": 2, 00:07:54.747 "num_base_bdevs_discovered": 1, 00:07:54.747 "num_base_bdevs_operational": 2, 00:07:54.747 "base_bdevs_list": [ 00:07:54.747 { 00:07:54.747 "name": "pt1", 00:07:54.747 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.747 "is_configured": true, 00:07:54.747 "data_offset": 2048, 00:07:54.747 "data_size": 63488 00:07:54.747 }, 00:07:54.747 { 00:07:54.747 "name": null, 00:07:54.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.747 "is_configured": false, 00:07:54.747 "data_offset": 2048, 00:07:54.747 "data_size": 63488 00:07:54.747 } 00:07:54.747 ] 00:07:54.747 }' 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.747 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.316 [2024-11-19 10:19:08.866042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:55.316 [2024-11-19 10:19:08.866160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:55.316 [2024-11-19 10:19:08.866202] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:55.316 [2024-11-19 10:19:08.866241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:55.316 [2024-11-19 10:19:08.866713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:55.316 [2024-11-19 10:19:08.866791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:55.316 [2024-11-19 10:19:08.866908] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:55.316 [2024-11-19 10:19:08.866966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:55.316 [2024-11-19 10:19:08.867137] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:55.316 [2024-11-19 10:19:08.867187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:55.316 [2024-11-19 10:19:08.867448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:55.316 [2024-11-19 10:19:08.867654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:55.316 [2024-11-19 10:19:08.867702] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:07:55.316 [2024-11-19 10:19:08.867902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.316 pt2 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.316 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.317 "name": "raid_bdev1", 00:07:55.317 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:55.317 "strip_size_kb": 0, 00:07:55.317 "state": "online", 00:07:55.317 "raid_level": "raid1", 00:07:55.317 "superblock": true, 00:07:55.317 "num_base_bdevs": 2, 00:07:55.317 "num_base_bdevs_discovered": 2, 00:07:55.317 "num_base_bdevs_operational": 2, 00:07:55.317 "base_bdevs_list": [ 00:07:55.317 { 00:07:55.317 "name": "pt1", 00:07:55.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.317 "is_configured": true, 00:07:55.317 "data_offset": 2048, 00:07:55.317 "data_size": 63488 00:07:55.317 }, 00:07:55.317 { 00:07:55.317 "name": "pt2", 00:07:55.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.317 "is_configured": true, 00:07:55.317 "data_offset": 2048, 00:07:55.317 "data_size": 63488 00:07:55.317 } 00:07:55.317 ] 00:07:55.317 }' 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.317 10:19:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.579 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.580 [2024-11-19 10:19:09.301532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.580 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.580 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.580 "name": "raid_bdev1", 00:07:55.580 "aliases": [ 00:07:55.580 "e9106b7a-deac-4567-b213-0f731d968eb0" 00:07:55.580 ], 00:07:55.580 "product_name": "Raid Volume", 00:07:55.580 "block_size": 512, 00:07:55.580 "num_blocks": 63488, 00:07:55.580 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:55.580 "assigned_rate_limits": { 00:07:55.580 "rw_ios_per_sec": 0, 00:07:55.580 "rw_mbytes_per_sec": 0, 00:07:55.580 "r_mbytes_per_sec": 0, 00:07:55.580 "w_mbytes_per_sec": 0 00:07:55.580 }, 00:07:55.580 "claimed": false, 00:07:55.580 "zoned": false, 00:07:55.580 "supported_io_types": { 00:07:55.580 "read": true, 00:07:55.580 "write": true, 00:07:55.580 "unmap": false, 00:07:55.580 "flush": false, 00:07:55.580 "reset": true, 00:07:55.580 "nvme_admin": false, 00:07:55.580 "nvme_io": false, 00:07:55.580 "nvme_io_md": false, 00:07:55.580 "write_zeroes": true, 00:07:55.580 "zcopy": false, 00:07:55.580 "get_zone_info": false, 00:07:55.580 "zone_management": false, 00:07:55.580 "zone_append": false, 00:07:55.580 "compare": false, 00:07:55.580 "compare_and_write": false, 00:07:55.580 "abort": false, 00:07:55.580 "seek_hole": false, 00:07:55.580 "seek_data": false, 00:07:55.580 "copy": false, 00:07:55.580 "nvme_iov_md": false 00:07:55.580 }, 00:07:55.580 "memory_domains": [ 00:07:55.580 { 00:07:55.580 "dma_device_id": 
"system", 00:07:55.580 "dma_device_type": 1 00:07:55.580 }, 00:07:55.580 { 00:07:55.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.580 "dma_device_type": 2 00:07:55.580 }, 00:07:55.580 { 00:07:55.580 "dma_device_id": "system", 00:07:55.580 "dma_device_type": 1 00:07:55.580 }, 00:07:55.580 { 00:07:55.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.580 "dma_device_type": 2 00:07:55.580 } 00:07:55.580 ], 00:07:55.580 "driver_specific": { 00:07:55.580 "raid": { 00:07:55.580 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:55.580 "strip_size_kb": 0, 00:07:55.580 "state": "online", 00:07:55.580 "raid_level": "raid1", 00:07:55.580 "superblock": true, 00:07:55.580 "num_base_bdevs": 2, 00:07:55.580 "num_base_bdevs_discovered": 2, 00:07:55.580 "num_base_bdevs_operational": 2, 00:07:55.580 "base_bdevs_list": [ 00:07:55.580 { 00:07:55.580 "name": "pt1", 00:07:55.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.580 "is_configured": true, 00:07:55.580 "data_offset": 2048, 00:07:55.580 "data_size": 63488 00:07:55.580 }, 00:07:55.580 { 00:07:55.581 "name": "pt2", 00:07:55.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.581 "is_configured": true, 00:07:55.581 "data_offset": 2048, 00:07:55.581 "data_size": 63488 00:07:55.581 } 00:07:55.581 ] 00:07:55.581 } 00:07:55.581 } 00:07:55.581 }' 00:07:55.581 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:55.842 pt2' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:55.842 [2024-11-19 10:19:09.505180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e9106b7a-deac-4567-b213-0f731d968eb0 '!=' e9106b7a-deac-4567-b213-0f731d968eb0 ']' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.842 [2024-11-19 10:19:09.544894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.842 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.842 "name": "raid_bdev1", 00:07:55.842 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:55.842 "strip_size_kb": 0, 00:07:55.842 "state": "online", 00:07:55.842 "raid_level": "raid1", 00:07:55.842 "superblock": true, 00:07:55.843 "num_base_bdevs": 2, 00:07:55.843 "num_base_bdevs_discovered": 1, 00:07:55.843 "num_base_bdevs_operational": 1, 00:07:55.843 "base_bdevs_list": [ 00:07:55.843 { 00:07:55.843 "name": null, 00:07:55.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.843 "is_configured": false, 00:07:55.843 "data_offset": 0, 00:07:55.843 "data_size": 63488 00:07:55.843 }, 00:07:55.843 { 00:07:55.843 "name": "pt2", 00:07:55.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.843 "is_configured": true, 00:07:55.843 "data_offset": 2048, 00:07:55.843 "data_size": 63488 00:07:55.843 } 00:07:55.843 ] 00:07:55.843 }' 00:07:55.843 10:19:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.843 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 10:19:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.439 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.439 10:19:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 [2024-11-19 10:19:10.000133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.439 [2024-11-19 10:19:10.000225] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.439 [2024-11-19 10:19:10.000343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.439 [2024-11-19 10:19:10.000436] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.439 [2024-11-19 10:19:10.000494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:56.439 
10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.439 [2024-11-19 10:19:10.071972] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:56.439 [2024-11-19 10:19:10.072109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.439 [2024-11-19 10:19:10.072135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:56.439 [2024-11-19 10:19:10.072148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.439 [2024-11-19 
10:19:10.074407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.439 [2024-11-19 10:19:10.074453] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:56.439 [2024-11-19 10:19:10.074546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:56.439 [2024-11-19 10:19:10.074600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:56.439 [2024-11-19 10:19:10.074711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:56.439 [2024-11-19 10:19:10.074724] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.439 [2024-11-19 10:19:10.074959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:56.439 [2024-11-19 10:19:10.075218] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:56.439 [2024-11-19 10:19:10.075232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:07:56.439 [2024-11-19 10:19:10.075374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.439 pt2 00:07:56.439 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.440 "name": "raid_bdev1", 00:07:56.440 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:56.440 "strip_size_kb": 0, 00:07:56.440 "state": "online", 00:07:56.440 "raid_level": "raid1", 00:07:56.440 "superblock": true, 00:07:56.440 "num_base_bdevs": 2, 00:07:56.440 "num_base_bdevs_discovered": 1, 00:07:56.440 "num_base_bdevs_operational": 1, 00:07:56.440 "base_bdevs_list": [ 00:07:56.440 { 00:07:56.440 "name": null, 00:07:56.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.440 "is_configured": false, 00:07:56.440 "data_offset": 2048, 00:07:56.440 "data_size": 63488 00:07:56.440 }, 00:07:56.440 { 00:07:56.440 "name": "pt2", 00:07:56.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.440 "is_configured": true, 00:07:56.440 "data_offset": 2048, 00:07:56.440 "data_size": 63488 00:07:56.440 } 00:07:56.440 ] 00:07:56.440 }' 
00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.440 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.703 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.703 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.703 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 [2024-11-19 10:19:10.483234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.962 [2024-11-19 10:19:10.483333] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.962 [2024-11-19 10:19:10.483456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.962 [2024-11-19 10:19:10.483537] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.962 [2024-11-19 10:19:10.483602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.962 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.962 [2024-11-19 10:19:10.527202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:56.962 [2024-11-19 10:19:10.527315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.963 [2024-11-19 10:19:10.527358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:56.963 [2024-11-19 10:19:10.527406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.963 [2024-11-19 10:19:10.529682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.963 [2024-11-19 10:19:10.529777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:56.963 [2024-11-19 10:19:10.529896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:56.963 [2024-11-19 10:19:10.529985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:56.963 [2024-11-19 10:19:10.530176] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:56.963 [2024-11-19 10:19:10.530239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.963 [2024-11-19 10:19:10.530285] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:56.963 [2024-11-19 10:19:10.530408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:07:56.963 [2024-11-19 10:19:10.530532] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:56.963 [2024-11-19 10:19:10.530576] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.963 [2024-11-19 10:19:10.530858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:56.963 [2024-11-19 10:19:10.531093] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:56.963 [2024-11-19 10:19:10.531151] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:56.963 [2024-11-19 10:19:10.531398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.963 pt1 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.963 "name": "raid_bdev1", 00:07:56.963 "uuid": "e9106b7a-deac-4567-b213-0f731d968eb0", 00:07:56.963 "strip_size_kb": 0, 00:07:56.963 "state": "online", 00:07:56.963 "raid_level": "raid1", 00:07:56.963 "superblock": true, 00:07:56.963 "num_base_bdevs": 2, 00:07:56.963 "num_base_bdevs_discovered": 1, 00:07:56.963 "num_base_bdevs_operational": 1, 00:07:56.963 "base_bdevs_list": [ 00:07:56.963 { 00:07:56.963 "name": null, 00:07:56.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.963 "is_configured": false, 00:07:56.963 "data_offset": 2048, 00:07:56.963 "data_size": 63488 00:07:56.963 }, 00:07:56.963 { 00:07:56.963 "name": "pt2", 00:07:56.963 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:56.963 "is_configured": true, 00:07:56.963 "data_offset": 2048, 00:07:56.963 "data_size": 63488 00:07:56.963 } 00:07:56.963 ] 00:07:56.963 }' 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.963 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:57.223 10:19:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:57.223 [2024-11-19 10:19:10.986853] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.223 10:19:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e9106b7a-deac-4567-b213-0f731d968eb0 '!=' e9106b7a-deac-4567-b213-0f731d968eb0 ']' 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63046 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63046 ']' 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63046 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63046 00:07:57.483 killing 
process with pid 63046 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63046' 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63046 00:07:57.483 [2024-11-19 10:19:11.054653] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:57.483 [2024-11-19 10:19:11.054748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.483 [2024-11-19 10:19:11.054797] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:57.483 [2024-11-19 10:19:11.054812] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:57.483 10:19:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63046 00:07:57.483 [2024-11-19 10:19:11.254885] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.864 10:19:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:58.864 00:07:58.864 real 0m5.864s 00:07:58.864 user 0m8.897s 00:07:58.864 sys 0m0.940s 00:07:58.864 ************************************ 00:07:58.864 END TEST raid_superblock_test 00:07:58.864 ************************************ 00:07:58.864 10:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.864 10:19:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.864 10:19:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:58.864 10:19:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.864 10:19:12 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.864 10:19:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.864 ************************************ 00:07:58.864 START TEST raid_read_error_test 00:07:58.864 ************************************ 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:58.864 10:19:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fsvBHo2wKR 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63369 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63369 00:07:58.864 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63369 ']' 00:07:58.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.865 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.865 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.865 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:58.865 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.865 10:19:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.865 [2024-11-19 10:19:12.495634] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:07:58.865 [2024-11-19 10:19:12.495750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63369 ] 00:07:59.124 [2024-11-19 10:19:12.647830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.124 [2024-11-19 10:19:12.749550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.383 [2024-11-19 10:19:12.953299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.383 [2024-11-19 10:19:12.953402] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.644 BaseBdev1_malloc 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.644 true 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.644 [2024-11-19 10:19:13.381901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:59.644 [2024-11-19 10:19:13.382039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.644 [2024-11-19 10:19:13.382082] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:59.644 [2024-11-19 10:19:13.382119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.644 [2024-11-19 10:19:13.384163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.644 [2024-11-19 10:19:13.384253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:59.644 BaseBdev1 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.644 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:59.904 BaseBdev2_malloc 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.904 true 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.904 [2024-11-19 10:19:13.447732] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:59.904 [2024-11-19 10:19:13.447838] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.904 [2024-11-19 10:19:13.447877] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:59.904 [2024-11-19 10:19:13.447912] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.904 [2024-11-19 10:19:13.449975] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.904 [2024-11-19 10:19:13.450088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:59.904 BaseBdev2 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:59.904 10:19:13 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.904 [2024-11-19 10:19:13.459764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.904 [2024-11-19 10:19:13.461679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:59.904 [2024-11-19 10:19:13.461881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.904 [2024-11-19 10:19:13.461898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:59.904 [2024-11-19 10:19:13.462169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:59.904 [2024-11-19 10:19:13.462358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:59.904 [2024-11-19 10:19:13.462378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:59.904 [2024-11-19 10:19:13.462549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.904 "name": "raid_bdev1", 00:07:59.904 "uuid": "06d4dee4-7518-4092-9160-360c70dbe440", 00:07:59.904 "strip_size_kb": 0, 00:07:59.904 "state": "online", 00:07:59.904 "raid_level": "raid1", 00:07:59.904 "superblock": true, 00:07:59.904 "num_base_bdevs": 2, 00:07:59.904 "num_base_bdevs_discovered": 2, 00:07:59.904 "num_base_bdevs_operational": 2, 00:07:59.904 "base_bdevs_list": [ 00:07:59.904 { 00:07:59.904 "name": "BaseBdev1", 00:07:59.904 "uuid": "3d07c67d-89fc-5ee7-842d-ea1cb2260453", 00:07:59.904 "is_configured": true, 00:07:59.904 "data_offset": 2048, 00:07:59.904 "data_size": 63488 00:07:59.904 }, 00:07:59.904 { 00:07:59.904 "name": "BaseBdev2", 00:07:59.904 "uuid": "134e9c7f-e959-5e16-b8ce-8483a53d46b9", 00:07:59.904 "is_configured": true, 00:07:59.904 "data_offset": 2048, 00:07:59.904 "data_size": 63488 00:07:59.904 } 00:07:59.904 ] 00:07:59.904 }' 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.904 10:19:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.163 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:00.163 10:19:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:00.422 [2024-11-19 10:19:13.956159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:01.360 10:19:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.360 "name": "raid_bdev1", 00:08:01.360 "uuid": "06d4dee4-7518-4092-9160-360c70dbe440", 00:08:01.360 "strip_size_kb": 0, 00:08:01.360 "state": "online", 00:08:01.360 "raid_level": "raid1", 00:08:01.360 "superblock": true, 00:08:01.360 "num_base_bdevs": 2, 00:08:01.360 "num_base_bdevs_discovered": 2, 00:08:01.360 "num_base_bdevs_operational": 2, 00:08:01.360 "base_bdevs_list": [ 00:08:01.360 { 00:08:01.360 "name": "BaseBdev1", 00:08:01.360 "uuid": "3d07c67d-89fc-5ee7-842d-ea1cb2260453", 00:08:01.360 "is_configured": true, 00:08:01.360 "data_offset": 2048, 00:08:01.360 "data_size": 63488 00:08:01.360 }, 00:08:01.360 { 00:08:01.360 "name": "BaseBdev2", 00:08:01.360 "uuid": "134e9c7f-e959-5e16-b8ce-8483a53d46b9", 00:08:01.360 "is_configured": true, 00:08:01.360 "data_offset": 2048, 00:08:01.360 "data_size": 63488 
00:08:01.360 } 00:08:01.360 ] 00:08:01.360 }' 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.360 10:19:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.620 [2024-11-19 10:19:15.356343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:01.620 [2024-11-19 10:19:15.356463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.620 [2024-11-19 10:19:15.359106] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.620 [2024-11-19 10:19:15.359206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.620 [2024-11-19 10:19:15.359334] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:01.620 [2024-11-19 10:19:15.359393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:01.620 { 00:08:01.620 "results": [ 00:08:01.620 { 00:08:01.620 "job": "raid_bdev1", 00:08:01.620 "core_mask": "0x1", 00:08:01.620 "workload": "randrw", 00:08:01.620 "percentage": 50, 00:08:01.620 "status": "finished", 00:08:01.620 "queue_depth": 1, 00:08:01.620 "io_size": 131072, 00:08:01.620 "runtime": 1.401109, 00:08:01.620 "iops": 17969.337146503236, 00:08:01.620 "mibps": 2246.1671433129045, 00:08:01.620 "io_failed": 0, 00:08:01.620 "io_timeout": 0, 00:08:01.620 "avg_latency_us": 52.86707418030562, 00:08:01.620 "min_latency_us": 23.923144104803495, 00:08:01.620 "max_latency_us": 1473.844541484716 00:08:01.620 } 00:08:01.620 ], 
00:08:01.620 "core_count": 1 00:08:01.620 } 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63369 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63369 ']' 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63369 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63369 00:08:01.620 killing process with pid 63369 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63369' 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63369 00:08:01.620 [2024-11-19 10:19:15.392722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:01.620 10:19:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63369 00:08:01.880 [2024-11-19 10:19:15.527235] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fsvBHo2wKR 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:03.287 00:08:03.287 real 0m4.288s 00:08:03.287 user 0m5.104s 00:08:03.287 sys 0m0.506s 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.287 10:19:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.287 ************************************ 00:08:03.287 END TEST raid_read_error_test 00:08:03.287 ************************************ 00:08:03.287 10:19:16 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:03.287 10:19:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:03.287 10:19:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.287 10:19:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.287 ************************************ 00:08:03.287 START TEST raid_write_error_test 00:08:03.287 ************************************ 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QgaAEYQW9h 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63515 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63515 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63515 ']' 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.287 10:19:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.287 [2024-11-19 10:19:16.850462] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
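The pass/fail decision for each of these tests comes from parsing the bdevperf log: as the `bdev_raid.sh@845` lines in the trace show, the `raid_bdev1` summary line is isolated with `grep` and its sixth whitespace-separated field (failed I/Os per second) is extracted with `awk`, then compared against `0.00`. A minimal sketch of that extraction, run against a fabricated one-line sample (the numbers are illustrative, not from this run; only the field-6 position matches the trace):

```shell
# Sketch of the fail_per_s check from bdev_raid.sh@845: keep the raid_bdev1
# summary line, drop the "Job" header, take field 6 (failed I/O per second).
# The sample line is fabricated for illustration.
sample='raid_bdev1 17969.34 2246.17 0 0 0.00 52.87'
fail_per_s=$(printf '%s\n' "$sample" | grep -v Job | grep raid_bdev1 | awk '{print $6}')
if [ "$fail_per_s" = "0.00" ]; then
    echo "PASS: no failed I/O recorded"
else
    echo "FAIL: $fail_per_s failed I/O per second"
fi
```

In the read test above this yields `fail_per_s=0.00` even with the error injected, because raid1 can serve the read from the surviving mirror; the write test instead checks that the raid degrades to a single operational base bdev.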
00:08:03.287 [2024-11-19 10:19:16.850660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63515 ] 00:08:03.287 [2024-11-19 10:19:17.003503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.547 [2024-11-19 10:19:17.118533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.547 [2024-11-19 10:19:17.322567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.547 [2024-11-19 10:19:17.322616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 BaseBdev1_malloc 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 true 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 [2024-11-19 10:19:17.749934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:04.117 [2024-11-19 10:19:17.750065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.117 [2024-11-19 10:19:17.750093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:04.117 [2024-11-19 10:19:17.750107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.117 [2024-11-19 10:19:17.752272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.117 [2024-11-19 10:19:17.752331] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:04.117 BaseBdev1 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 BaseBdev2_malloc 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:04.117 10:19:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 true 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.117 [2024-11-19 10:19:17.816453] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:04.117 [2024-11-19 10:19:17.816523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:04.117 [2024-11-19 10:19:17.816543] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:04.117 [2024-11-19 10:19:17.816556] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:04.117 [2024-11-19 10:19:17.818646] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:04.117 [2024-11-19 10:19:17.818691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:04.117 BaseBdev2 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:04.117 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.118 [2024-11-19 10:19:17.828487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:04.118 [2024-11-19 10:19:17.830338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.118 [2024-11-19 10:19:17.830551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:04.118 [2024-11-19 10:19:17.830568] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:04.118 [2024-11-19 10:19:17.830824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:04.118 [2024-11-19 10:19:17.831045] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:04.118 [2024-11-19 10:19:17.831058] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:04.118 [2024-11-19 10:19:17.831234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.118 "name": "raid_bdev1", 00:08:04.118 "uuid": "4412353a-3aef-41ce-bb46-230bf6fa282a", 00:08:04.118 "strip_size_kb": 0, 00:08:04.118 "state": "online", 00:08:04.118 "raid_level": "raid1", 00:08:04.118 "superblock": true, 00:08:04.118 "num_base_bdevs": 2, 00:08:04.118 "num_base_bdevs_discovered": 2, 00:08:04.118 "num_base_bdevs_operational": 2, 00:08:04.118 "base_bdevs_list": [ 00:08:04.118 { 00:08:04.118 "name": "BaseBdev1", 00:08:04.118 "uuid": "f5782d4f-6460-5e2e-bb06-e21a4bf8b098", 00:08:04.118 "is_configured": true, 00:08:04.118 "data_offset": 2048, 00:08:04.118 "data_size": 63488 00:08:04.118 }, 00:08:04.118 { 00:08:04.118 "name": "BaseBdev2", 00:08:04.118 "uuid": "ad2e725d-154a-52d6-a58c-0ac8d6511477", 00:08:04.118 "is_configured": true, 00:08:04.118 "data_offset": 2048, 00:08:04.118 "data_size": 63488 00:08:04.118 } 00:08:04.118 ] 00:08:04.118 }' 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.118 10:19:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.688 10:19:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:04.688 10:19:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:04.688 [2024-11-19 10:19:18.364903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.624 [2024-11-19 10:19:19.281323] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:08:05.624 [2024-11-19 10:19:19.281482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:05.624 [2024-11-19 10:19:19.281713] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:05.624 "name": "raid_bdev1",
00:08:05.624 "uuid": "4412353a-3aef-41ce-bb46-230bf6fa282a",
00:08:05.624 "strip_size_kb": 0,
00:08:05.624 "state": "online",
00:08:05.624 "raid_level": "raid1",
00:08:05.624 "superblock": true,
00:08:05.624 "num_base_bdevs": 2,
00:08:05.624 "num_base_bdevs_discovered": 1,
00:08:05.624 "num_base_bdevs_operational": 1,
00:08:05.624 "base_bdevs_list": [
00:08:05.624 {
00:08:05.624 "name": null,
00:08:05.624 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:05.624 "is_configured": false,
00:08:05.624 "data_offset": 0,
00:08:05.624 "data_size": 63488
00:08:05.624 },
00:08:05.624 {
00:08:05.624 "name": "BaseBdev2",
00:08:05.624 "uuid": "ad2e725d-154a-52d6-a58c-0ac8d6511477",
00:08:05.624 "is_configured": true,
00:08:05.624 "data_offset": 2048,
00:08:05.624 "data_size": 63488
00:08:05.624 }
00:08:05.624 ]
00:08:05.624 }'
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:05.624 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:06.192 [2024-11-19 10:19:19.707432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:06.192 [2024-11-19 10:19:19.707560] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:06.192 [2024-11-19 10:19:19.710190] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:06.192 [2024-11-19 10:19:19.710284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:06.192 [2024-11-19 10:19:19.710376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:06.192 [2024-11-19 10:19:19.710446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
{
00:08:06.192 "results": [
00:08:06.192 {
00:08:06.192 "job": "raid_bdev1",
00:08:06.192 "core_mask": "0x1",
00:08:06.192 "workload": "randrw",
00:08:06.192 "percentage": 50,
00:08:06.192 "status": "finished",
00:08:06.192 "queue_depth": 1,
00:08:06.192 "io_size": 131072,
00:08:06.192 "runtime": 1.343394,
00:08:06.192 "iops": 20102.069832082027,
00:08:06.192 "mibps": 2512.7587290102533,
00:08:06.192 "io_failed": 0,
00:08:06.192 "io_timeout": 0,
00:08:06.192 "avg_latency_us": 46.905475081842354,
00:08:06.192 "min_latency_us": 23.923144104803495,
00:08:06.192 "max_latency_us": 1438.071615720524
00:08:06.192 }
00:08:06.192 ],
00:08:06.192 "core_count": 1
00:08:06.192 }
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63515
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63515 ']'
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63515
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63515
killing process with pid 63515
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63515'
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63515
00:08:06.192 [2024-11-19 10:19:19.754619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:06.192 10:19:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63515
00:08:06.192 [2024-11-19 10:19:19.889319] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QgaAEYQW9h
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
************************************
00:08:07.572 END TEST raid_write_error_test
************************************
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:08:07.572
00:08:07.572 real 0m4.291s
00:08:07.572 user 0m5.115s
00:08:07.572 sys 0m0.531s
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:07.572 10:19:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:07.572 10:19:21 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:08:07.572 10:19:21 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:08:07.572 10:19:21 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:08:07.572 10:19:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:07.572 10:19:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:07.572 10:19:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x
************************************
00:08:07.572 START TEST raid_state_function_test
************************************
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:07.572 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63653
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63653'
Process raid pid: 63653
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63653
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63653 ']'
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:07.573 10:19:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:07.573 [2024-11-19 10:19:21.203736] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization...
00:08:07.573 [2024-11-19 10:19:21.203851] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:07.832 [2024-11-19 10:19:21.379373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:07.832 [2024-11-19 10:19:21.492073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:08.091 [2024-11-19 10:19:21.701299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:08.091 [2024-11-19 10:19:21.701352] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.350 [2024-11-19 10:19:22.055371] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:08.350 [2024-11-19 10:19:22.055438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:08.350 [2024-11-19 10:19:22.055467] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:08.350 [2024-11-19 10:19:22.055480] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:08.350 [2024-11-19 10:19:22.055489] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:08.350 [2024-11-19 10:19:22.055501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:08.350 "name": "Existed_Raid",
00:08:08.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.350 "strip_size_kb": 64,
00:08:08.350 "state": "configuring",
00:08:08.350 "raid_level": "raid0",
00:08:08.350 "superblock": false,
00:08:08.350 "num_base_bdevs": 3,
00:08:08.350 "num_base_bdevs_discovered": 0,
00:08:08.350 "num_base_bdevs_operational": 3,
00:08:08.350 "base_bdevs_list": [
00:08:08.350 {
00:08:08.350 "name": "BaseBdev1",
00:08:08.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.350 "is_configured": false,
00:08:08.350 "data_offset": 0,
00:08:08.350 "data_size": 0
00:08:08.350 },
00:08:08.350 {
00:08:08.350 "name": "BaseBdev2",
00:08:08.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.350 "is_configured": false,
00:08:08.350 "data_offset": 0,
00:08:08.350 "data_size": 0
00:08:08.350 },
00:08:08.350 {
00:08:08.350 "name": "BaseBdev3",
00:08:08.350 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.350 "is_configured": false,
00:08:08.350 "data_offset": 0,
00:08:08.350 "data_size": 0
00:08:08.350 }
00:08:08.350 ]
00:08:08.350 }'
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:08.350 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 [2024-11-19 10:19:22.510654] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:08.920 [2024-11-19 10:19:22.510771] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 [2024-11-19 10:19:22.522624] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:08.920 [2024-11-19 10:19:22.522726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:08.920 [2024-11-19 10:19:22.522782] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:08.920 [2024-11-19 10:19:22.522813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:08.920 [2024-11-19 10:19:22.522855] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:08.920 [2024-11-19 10:19:22.522884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 [2024-11-19 10:19:22.570036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 [
00:08:08.920 {
00:08:08.920 "name": "BaseBdev1",
00:08:08.920 "aliases": [
00:08:08.920 "b35b3687-f4af-45e3-ba69-8a5153b33f2e"
00:08:08.920 ],
00:08:08.920 "product_name": "Malloc disk",
00:08:08.920 "block_size": 512,
00:08:08.920 "num_blocks": 65536,
00:08:08.920 "uuid": "b35b3687-f4af-45e3-ba69-8a5153b33f2e",
00:08:08.920 "assigned_rate_limits": {
00:08:08.920 "rw_ios_per_sec": 0,
00:08:08.920 "rw_mbytes_per_sec": 0,
00:08:08.920 "r_mbytes_per_sec": 0,
00:08:08.920 "w_mbytes_per_sec": 0
00:08:08.920 },
00:08:08.920 "claimed": true,
00:08:08.920 "claim_type": "exclusive_write",
00:08:08.920 "zoned": false,
00:08:08.920 "supported_io_types": {
00:08:08.920 "read": true,
00:08:08.920 "write": true,
00:08:08.920 "unmap": true,
00:08:08.920 "flush": true,
00:08:08.920 "reset": true,
00:08:08.920 "nvme_admin": false,
00:08:08.920 "nvme_io": false,
00:08:08.920 "nvme_io_md": false,
00:08:08.920 "write_zeroes": true,
00:08:08.920 "zcopy": true,
00:08:08.920 "get_zone_info": false,
00:08:08.920 "zone_management": false,
00:08:08.920 "zone_append": false,
00:08:08.920 "compare": false,
00:08:08.920 "compare_and_write": false,
00:08:08.920 "abort": true,
00:08:08.920 "seek_hole": false,
00:08:08.920 "seek_data": false,
00:08:08.920 "copy": true,
00:08:08.920 "nvme_iov_md": false
00:08:08.920 },
00:08:08.920 "memory_domains": [
00:08:08.920 {
00:08:08.920 "dma_device_id": "system",
00:08:08.920 "dma_device_type": 1
00:08:08.920 },
00:08:08.920 {
00:08:08.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:08.920 "dma_device_type": 2
00:08:08.920 }
00:08:08.920 ],
00:08:08.920 "driver_specific": {}
00:08:08.920 }
00:08:08.920 ]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:08.920 "name": "Existed_Raid",
00:08:08.920 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.920 "strip_size_kb": 64,
00:08:08.920 "state": "configuring",
00:08:08.920 "raid_level": "raid0",
00:08:08.920 "superblock": false,
00:08:08.920 "num_base_bdevs": 3,
00:08:08.920 "num_base_bdevs_discovered": 1,
00:08:08.920 "num_base_bdevs_operational": 3,
00:08:08.920 "base_bdevs_list": [
00:08:08.920 {
00:08:08.920 "name": "BaseBdev1",
00:08:08.920 "uuid": "b35b3687-f4af-45e3-ba69-8a5153b33f2e",
00:08:08.920 "is_configured": true,
00:08:08.920 "data_offset": 0,
00:08:08.920 "data_size": 65536
00:08:08.920 },
00:08:08.920 {
00:08:08.920 "name": "BaseBdev2",
00:08:08.920 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.920 "is_configured": false,
00:08:08.920 "data_offset": 0,
00:08:08.920 "data_size": 0
00:08:08.920 },
00:08:08.920 {
00:08:08.920 "name": "BaseBdev3",
00:08:08.920 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:08.920 "is_configured": false,
00:08:08.920 "data_offset": 0,
00:08:08.920 "data_size": 0
00:08:08.920 }
00:08:08.920 ]
00:08:08.920 }'
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:08.920 10:19:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.488 [2024-11-19 10:19:23.041282] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:09.488 [2024-11-19 10:19:23.041424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.488 [2024-11-19 10:19:23.053330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-11-19 10:19:23.055213] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-11-19 10:19:23.055266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
[2024-11-19 10:19:23.055278] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
[2024-11-19 10:19:23.055290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:09.488 "name": "Existed_Raid",
00:08:09.488 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:09.488 "strip_size_kb": 64,
00:08:09.488 "state": "configuring",
00:08:09.488 "raid_level": "raid0",
00:08:09.488 "superblock": false,
00:08:09.488 "num_base_bdevs": 3,
00:08:09.488 "num_base_bdevs_discovered": 1,
00:08:09.488 "num_base_bdevs_operational": 3,
00:08:09.488 "base_bdevs_list": [
00:08:09.488 {
00:08:09.488 "name": "BaseBdev1",
00:08:09.488 "uuid": "b35b3687-f4af-45e3-ba69-8a5153b33f2e",
00:08:09.488 "is_configured": true,
00:08:09.488 "data_offset": 0,
00:08:09.488 "data_size": 65536
00:08:09.488 },
00:08:09.488 {
00:08:09.488 "name": "BaseBdev2",
00:08:09.488 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:09.488 "is_configured": false,
00:08:09.488 "data_offset": 0,
00:08:09.488 "data_size": 0
00:08:09.488 },
00:08:09.488 {
00:08:09.488 "name": "BaseBdev3",
00:08:09.488 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:09.488 "is_configured": false,
00:08:09.488 "data_offset": 0,
00:08:09.488 "data_size": 0
00:08:09.488 }
00:08:09.488 ]
00:08:09.488 }'
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:09.488 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:09.748 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:08:09.748 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:09.748 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.010 [2024-11-19 10:19:23.567889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.010 [
00:08:10.010 {
00:08:10.010 "name": "BaseBdev2",
00:08:10.010 "aliases": [
00:08:10.010 "969c2946-e871-4686-974c-7f91c8d9160b"
00:08:10.010 ],
00:08:10.010 "product_name": "Malloc disk",
00:08:10.010 "block_size": 512,
00:08:10.010 "num_blocks": 65536,
00:08:10.010 "uuid": "969c2946-e871-4686-974c-7f91c8d9160b",
00:08:10.010 "assigned_rate_limits": {
00:08:10.010 "rw_ios_per_sec": 0,
00:08:10.010 "rw_mbytes_per_sec": 0,
00:08:10.010 "r_mbytes_per_sec": 0,
00:08:10.010 "w_mbytes_per_sec": 0
00:08:10.010 },
00:08:10.010 "claimed": true,
00:08:10.010 "claim_type": "exclusive_write",
00:08:10.010 "zoned": false,
00:08:10.010 "supported_io_types": {
00:08:10.010 "read": true,
00:08:10.010 "write": true,
00:08:10.010 "unmap": true,
00:08:10.010 "flush": true,
00:08:10.010 "reset": true,
00:08:10.010 "nvme_admin": false,
00:08:10.010 "nvme_io": false,
00:08:10.010 "nvme_io_md": false,
00:08:10.010 "write_zeroes": true,
00:08:10.010 "zcopy": true,
00:08:10.010 "get_zone_info": false,
00:08:10.010 "zone_management": false,
00:08:10.010 "zone_append": false,
00:08:10.010 "compare": false,
00:08:10.010 "compare_and_write": false,
00:08:10.010 "abort": true,
00:08:10.010 "seek_hole": false,
00:08:10.010 "seek_data": false,
00:08:10.010 "copy": true,
00:08:10.010 "nvme_iov_md": false
00:08:10.010 },
00:08:10.010 "memory_domains": [
00:08:10.010 {
00:08:10.010 "dma_device_id": "system",
00:08:10.010 "dma_device_type": 1
00:08:10.010 },
00:08:10.010 {
00:08:10.010 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:10.010 "dma_device_type": 2
00:08:10.010 }
00:08:10.010 ],
00:08:10.010 "driver_specific": {}
00:08:10.010 }
00:08:10.010 ]
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:10.010 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:10.011 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.011 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.011 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:10.011 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:10.011 "name": "Existed_Raid",
00:08:10.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.011 "strip_size_kb": 64,
00:08:10.011 "state": "configuring",
00:08:10.011 "raid_level": "raid0",
00:08:10.011 "superblock": false,
00:08:10.011 "num_base_bdevs": 3,
00:08:10.011 "num_base_bdevs_discovered": 2,
00:08:10.011 "num_base_bdevs_operational": 3,
00:08:10.011 "base_bdevs_list": [
00:08:10.011 {
00:08:10.011 "name": "BaseBdev1",
00:08:10.011 "uuid": "b35b3687-f4af-45e3-ba69-8a5153b33f2e",
00:08:10.011 "is_configured": true,
00:08:10.011 "data_offset": 0,
00:08:10.011 "data_size": 65536
00:08:10.011 },
00:08:10.011 {
00:08:10.011 "name": "BaseBdev2",
00:08:10.011 "uuid": "969c2946-e871-4686-974c-7f91c8d9160b",
00:08:10.011 "is_configured": true,
00:08:10.011 "data_offset": 0,
00:08:10.011 "data_size": 65536
00:08:10.011 },
00:08:10.011 {
00:08:10.011 "name": "BaseBdev3",
00:08:10.011 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:10.011 "is_configured": false,
00:08:10.011 "data_offset": 0,
00:08:10.011 "data_size": 0
00:08:10.011 }
00:08:10.011 ]
00:08:10.011 }'
00:08:10.011 10:19:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:10.011 10:19:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.581 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:08:10.581 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:10.581 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:10.581 [2024-11-19 10:19:24.110443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
[2024-11-19 10:19:24.110594]
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.581 [2024-11-19 10:19:24.110617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:10.581 [2024-11-19 10:19:24.110910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:10.581 [2024-11-19 10:19:24.111139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:10.581 [2024-11-19 10:19:24.111152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:10.581 [2024-11-19 10:19:24.111448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.581 BaseBdev3 00:08:10.581 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.581 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.582 
10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.582 [ 00:08:10.582 { 00:08:10.582 "name": "BaseBdev3", 00:08:10.582 "aliases": [ 00:08:10.582 "e68bc88f-b2c6-4295-b6ea-9598cb9b2b1d" 00:08:10.582 ], 00:08:10.582 "product_name": "Malloc disk", 00:08:10.582 "block_size": 512, 00:08:10.582 "num_blocks": 65536, 00:08:10.582 "uuid": "e68bc88f-b2c6-4295-b6ea-9598cb9b2b1d", 00:08:10.582 "assigned_rate_limits": { 00:08:10.582 "rw_ios_per_sec": 0, 00:08:10.582 "rw_mbytes_per_sec": 0, 00:08:10.582 "r_mbytes_per_sec": 0, 00:08:10.582 "w_mbytes_per_sec": 0 00:08:10.582 }, 00:08:10.582 "claimed": true, 00:08:10.582 "claim_type": "exclusive_write", 00:08:10.582 "zoned": false, 00:08:10.582 "supported_io_types": { 00:08:10.582 "read": true, 00:08:10.582 "write": true, 00:08:10.582 "unmap": true, 00:08:10.582 "flush": true, 00:08:10.582 "reset": true, 00:08:10.582 "nvme_admin": false, 00:08:10.582 "nvme_io": false, 00:08:10.582 "nvme_io_md": false, 00:08:10.582 "write_zeroes": true, 00:08:10.582 "zcopy": true, 00:08:10.582 "get_zone_info": false, 00:08:10.582 "zone_management": false, 00:08:10.582 "zone_append": false, 00:08:10.582 "compare": false, 00:08:10.582 "compare_and_write": false, 00:08:10.582 "abort": true, 00:08:10.582 "seek_hole": false, 00:08:10.582 "seek_data": false, 00:08:10.582 "copy": true, 00:08:10.582 "nvme_iov_md": false 00:08:10.582 }, 00:08:10.582 "memory_domains": [ 00:08:10.582 { 00:08:10.582 "dma_device_id": "system", 00:08:10.582 "dma_device_type": 1 00:08:10.582 }, 00:08:10.582 { 00:08:10.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:10.582 "dma_device_type": 2 00:08:10.582 } 00:08:10.582 ], 00:08:10.582 "driver_specific": {} 00:08:10.582 } 00:08:10.582 ] 
00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.582 "name": "Existed_Raid", 00:08:10.582 "uuid": "b0d8740c-9ab1-44fc-acfd-83c616562a76", 00:08:10.582 "strip_size_kb": 64, 00:08:10.582 "state": "online", 00:08:10.582 "raid_level": "raid0", 00:08:10.582 "superblock": false, 00:08:10.582 "num_base_bdevs": 3, 00:08:10.582 "num_base_bdevs_discovered": 3, 00:08:10.582 "num_base_bdevs_operational": 3, 00:08:10.582 "base_bdevs_list": [ 00:08:10.582 { 00:08:10.582 "name": "BaseBdev1", 00:08:10.582 "uuid": "b35b3687-f4af-45e3-ba69-8a5153b33f2e", 00:08:10.582 "is_configured": true, 00:08:10.582 "data_offset": 0, 00:08:10.582 "data_size": 65536 00:08:10.582 }, 00:08:10.582 { 00:08:10.582 "name": "BaseBdev2", 00:08:10.582 "uuid": "969c2946-e871-4686-974c-7f91c8d9160b", 00:08:10.582 "is_configured": true, 00:08:10.582 "data_offset": 0, 00:08:10.582 "data_size": 65536 00:08:10.582 }, 00:08:10.582 { 00:08:10.582 "name": "BaseBdev3", 00:08:10.582 "uuid": "e68bc88f-b2c6-4295-b6ea-9598cb9b2b1d", 00:08:10.582 "is_configured": true, 00:08:10.582 "data_offset": 0, 00:08:10.582 "data_size": 65536 00:08:10.582 } 00:08:10.582 ] 00:08:10.582 }' 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.582 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.842 [2024-11-19 10:19:24.602091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:10.842 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.102 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.103 "name": "Existed_Raid", 00:08:11.103 "aliases": [ 00:08:11.103 "b0d8740c-9ab1-44fc-acfd-83c616562a76" 00:08:11.103 ], 00:08:11.103 "product_name": "Raid Volume", 00:08:11.103 "block_size": 512, 00:08:11.103 "num_blocks": 196608, 00:08:11.103 "uuid": "b0d8740c-9ab1-44fc-acfd-83c616562a76", 00:08:11.103 "assigned_rate_limits": { 00:08:11.103 "rw_ios_per_sec": 0, 00:08:11.103 "rw_mbytes_per_sec": 0, 00:08:11.103 "r_mbytes_per_sec": 0, 00:08:11.103 "w_mbytes_per_sec": 0 00:08:11.103 }, 00:08:11.103 "claimed": false, 00:08:11.103 "zoned": false, 00:08:11.103 "supported_io_types": { 00:08:11.103 "read": true, 00:08:11.103 "write": true, 00:08:11.103 "unmap": true, 00:08:11.103 "flush": true, 00:08:11.103 "reset": true, 00:08:11.103 "nvme_admin": false, 00:08:11.103 "nvme_io": false, 00:08:11.103 "nvme_io_md": false, 00:08:11.103 "write_zeroes": true, 00:08:11.103 "zcopy": false, 00:08:11.103 "get_zone_info": false, 00:08:11.103 "zone_management": false, 00:08:11.103 
"zone_append": false, 00:08:11.103 "compare": false, 00:08:11.103 "compare_and_write": false, 00:08:11.103 "abort": false, 00:08:11.103 "seek_hole": false, 00:08:11.103 "seek_data": false, 00:08:11.103 "copy": false, 00:08:11.103 "nvme_iov_md": false 00:08:11.103 }, 00:08:11.103 "memory_domains": [ 00:08:11.103 { 00:08:11.103 "dma_device_id": "system", 00:08:11.103 "dma_device_type": 1 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.103 "dma_device_type": 2 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "dma_device_id": "system", 00:08:11.103 "dma_device_type": 1 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.103 "dma_device_type": 2 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "dma_device_id": "system", 00:08:11.103 "dma_device_type": 1 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.103 "dma_device_type": 2 00:08:11.103 } 00:08:11.103 ], 00:08:11.103 "driver_specific": { 00:08:11.103 "raid": { 00:08:11.103 "uuid": "b0d8740c-9ab1-44fc-acfd-83c616562a76", 00:08:11.103 "strip_size_kb": 64, 00:08:11.103 "state": "online", 00:08:11.103 "raid_level": "raid0", 00:08:11.103 "superblock": false, 00:08:11.103 "num_base_bdevs": 3, 00:08:11.103 "num_base_bdevs_discovered": 3, 00:08:11.103 "num_base_bdevs_operational": 3, 00:08:11.103 "base_bdevs_list": [ 00:08:11.103 { 00:08:11.103 "name": "BaseBdev1", 00:08:11.103 "uuid": "b35b3687-f4af-45e3-ba69-8a5153b33f2e", 00:08:11.103 "is_configured": true, 00:08:11.103 "data_offset": 0, 00:08:11.103 "data_size": 65536 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "name": "BaseBdev2", 00:08:11.103 "uuid": "969c2946-e871-4686-974c-7f91c8d9160b", 00:08:11.103 "is_configured": true, 00:08:11.103 "data_offset": 0, 00:08:11.103 "data_size": 65536 00:08:11.103 }, 00:08:11.103 { 00:08:11.103 "name": "BaseBdev3", 00:08:11.103 "uuid": "e68bc88f-b2c6-4295-b6ea-9598cb9b2b1d", 00:08:11.103 "is_configured": true, 
00:08:11.103 "data_offset": 0, 00:08:11.103 "data_size": 65536 00:08:11.103 } 00:08:11.103 ] 00:08:11.103 } 00:08:11.103 } 00:08:11.103 }' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:11.103 BaseBdev2 00:08:11.103 BaseBdev3' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.103 10:19:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.103 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.103 [2024-11-19 10:19:24.817456] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:11.103 [2024-11-19 10:19:24.817544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:11.103 [2024-11-19 10:19:24.817642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.364 "name": "Existed_Raid", 00:08:11.364 "uuid": "b0d8740c-9ab1-44fc-acfd-83c616562a76", 00:08:11.364 "strip_size_kb": 64, 00:08:11.364 "state": "offline", 00:08:11.364 "raid_level": "raid0", 00:08:11.364 "superblock": false, 00:08:11.364 "num_base_bdevs": 3, 00:08:11.364 "num_base_bdevs_discovered": 2, 00:08:11.364 "num_base_bdevs_operational": 2, 00:08:11.364 "base_bdevs_list": [ 00:08:11.364 { 00:08:11.364 "name": null, 00:08:11.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.364 "is_configured": false, 00:08:11.364 "data_offset": 0, 00:08:11.364 "data_size": 65536 00:08:11.364 }, 00:08:11.364 { 00:08:11.364 "name": "BaseBdev2", 00:08:11.364 "uuid": "969c2946-e871-4686-974c-7f91c8d9160b", 00:08:11.364 "is_configured": true, 00:08:11.364 "data_offset": 0, 00:08:11.364 "data_size": 65536 00:08:11.364 }, 00:08:11.364 { 00:08:11.364 "name": "BaseBdev3", 00:08:11.364 "uuid": "e68bc88f-b2c6-4295-b6ea-9598cb9b2b1d", 00:08:11.364 "is_configured": true, 00:08:11.364 "data_offset": 0, 00:08:11.364 "data_size": 65536 00:08:11.364 } 00:08:11.364 ] 00:08:11.364 }' 00:08:11.364 10:19:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.364 10:19:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.624 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.884 [2024-11-19 10:19:25.406868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.884 10:19:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.884 [2024-11-19 10:19:25.557472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:11.884 [2024-11-19 10:19:25.557585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:11.884 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 BaseBdev2 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 [ 00:08:12.146 { 00:08:12.146 "name": "BaseBdev2", 00:08:12.146 "aliases": [ 00:08:12.146 "2030574a-3068-4782-863e-98d4b6f4effa" 00:08:12.146 ], 00:08:12.146 "product_name": "Malloc disk", 00:08:12.146 "block_size": 512, 00:08:12.146 "num_blocks": 65536, 00:08:12.146 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:12.146 "assigned_rate_limits": { 00:08:12.146 "rw_ios_per_sec": 0, 00:08:12.146 "rw_mbytes_per_sec": 0, 00:08:12.146 "r_mbytes_per_sec": 0, 00:08:12.146 "w_mbytes_per_sec": 0 00:08:12.146 }, 00:08:12.146 "claimed": false, 00:08:12.146 "zoned": false, 00:08:12.146 "supported_io_types": { 00:08:12.146 "read": true, 00:08:12.146 "write": true, 00:08:12.146 "unmap": true, 00:08:12.146 "flush": true, 00:08:12.146 "reset": true, 00:08:12.146 "nvme_admin": false, 00:08:12.146 "nvme_io": false, 00:08:12.146 "nvme_io_md": false, 00:08:12.146 "write_zeroes": true, 00:08:12.146 "zcopy": true, 00:08:12.146 "get_zone_info": false, 00:08:12.146 "zone_management": false, 00:08:12.146 "zone_append": false, 00:08:12.146 "compare": false, 00:08:12.146 "compare_and_write": false, 00:08:12.146 "abort": true, 00:08:12.146 "seek_hole": false, 00:08:12.146 "seek_data": false, 00:08:12.146 "copy": true, 00:08:12.146 "nvme_iov_md": false 00:08:12.146 }, 00:08:12.146 "memory_domains": [ 00:08:12.146 { 00:08:12.146 "dma_device_id": "system", 00:08:12.146 "dma_device_type": 1 00:08:12.146 }, 
00:08:12.146 { 00:08:12.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.146 "dma_device_type": 2 00:08:12.146 } 00:08:12.146 ], 00:08:12.146 "driver_specific": {} 00:08:12.146 } 00:08:12.146 ] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 BaseBdev3 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 [ 00:08:12.146 { 00:08:12.146 "name": "BaseBdev3", 00:08:12.146 "aliases": [ 00:08:12.146 "ac3180a0-6345-48a0-a0e4-531949a04dfa" 00:08:12.146 ], 00:08:12.146 "product_name": "Malloc disk", 00:08:12.146 "block_size": 512, 00:08:12.146 "num_blocks": 65536, 00:08:12.146 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:12.146 "assigned_rate_limits": { 00:08:12.146 "rw_ios_per_sec": 0, 00:08:12.146 "rw_mbytes_per_sec": 0, 00:08:12.146 "r_mbytes_per_sec": 0, 00:08:12.146 "w_mbytes_per_sec": 0 00:08:12.146 }, 00:08:12.146 "claimed": false, 00:08:12.146 "zoned": false, 00:08:12.146 "supported_io_types": { 00:08:12.146 "read": true, 00:08:12.146 "write": true, 00:08:12.146 "unmap": true, 00:08:12.146 "flush": true, 00:08:12.146 "reset": true, 00:08:12.146 "nvme_admin": false, 00:08:12.146 "nvme_io": false, 00:08:12.146 "nvme_io_md": false, 00:08:12.146 "write_zeroes": true, 00:08:12.146 "zcopy": true, 00:08:12.146 "get_zone_info": false, 00:08:12.146 "zone_management": false, 00:08:12.146 "zone_append": false, 00:08:12.146 "compare": false, 00:08:12.146 "compare_and_write": false, 00:08:12.146 "abort": true, 00:08:12.146 "seek_hole": false, 00:08:12.146 "seek_data": false, 00:08:12.146 "copy": true, 00:08:12.146 "nvme_iov_md": false 00:08:12.146 }, 00:08:12.146 "memory_domains": [ 00:08:12.146 { 00:08:12.146 "dma_device_id": "system", 00:08:12.146 "dma_device_type": 1 00:08:12.146 }, 00:08:12.146 { 
00:08:12.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.146 "dma_device_type": 2 00:08:12.146 } 00:08:12.146 ], 00:08:12.146 "driver_specific": {} 00:08:12.146 } 00:08:12.146 ] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.146 [2024-11-19 10:19:25.860348] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:12.146 [2024-11-19 10:19:25.860443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:12.146 [2024-11-19 10:19:25.860494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.146 [2024-11-19 10:19:25.862293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.146 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.147 "name": "Existed_Raid", 00:08:12.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.147 "strip_size_kb": 64, 00:08:12.147 "state": "configuring", 00:08:12.147 "raid_level": "raid0", 00:08:12.147 "superblock": false, 00:08:12.147 "num_base_bdevs": 3, 00:08:12.147 "num_base_bdevs_discovered": 2, 00:08:12.147 "num_base_bdevs_operational": 3, 00:08:12.147 "base_bdevs_list": [ 00:08:12.147 { 00:08:12.147 "name": "BaseBdev1", 00:08:12.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.147 
"is_configured": false, 00:08:12.147 "data_offset": 0, 00:08:12.147 "data_size": 0 00:08:12.147 }, 00:08:12.147 { 00:08:12.147 "name": "BaseBdev2", 00:08:12.147 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:12.147 "is_configured": true, 00:08:12.147 "data_offset": 0, 00:08:12.147 "data_size": 65536 00:08:12.147 }, 00:08:12.147 { 00:08:12.147 "name": "BaseBdev3", 00:08:12.147 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:12.147 "is_configured": true, 00:08:12.147 "data_offset": 0, 00:08:12.147 "data_size": 65536 00:08:12.147 } 00:08:12.147 ] 00:08:12.147 }' 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.147 10:19:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.716 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:12.716 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.716 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.717 [2024-11-19 10:19:26.319611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.717 10:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.717 "name": "Existed_Raid", 00:08:12.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.717 "strip_size_kb": 64, 00:08:12.717 "state": "configuring", 00:08:12.717 "raid_level": "raid0", 00:08:12.717 "superblock": false, 00:08:12.717 "num_base_bdevs": 3, 00:08:12.717 "num_base_bdevs_discovered": 1, 00:08:12.717 "num_base_bdevs_operational": 3, 00:08:12.717 "base_bdevs_list": [ 00:08:12.717 { 00:08:12.717 "name": "BaseBdev1", 00:08:12.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.717 "is_configured": false, 00:08:12.717 "data_offset": 0, 00:08:12.717 "data_size": 0 00:08:12.717 }, 00:08:12.717 { 00:08:12.717 "name": null, 00:08:12.717 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:12.717 "is_configured": false, 00:08:12.717 "data_offset": 0, 
00:08:12.717 "data_size": 65536 00:08:12.717 }, 00:08:12.717 { 00:08:12.717 "name": "BaseBdev3", 00:08:12.717 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:12.717 "is_configured": true, 00:08:12.717 "data_offset": 0, 00:08:12.717 "data_size": 65536 00:08:12.717 } 00:08:12.717 ] 00:08:12.717 }' 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.717 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.287 [2024-11-19 10:19:26.886697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:13.287 BaseBdev1 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.287 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.287 [ 00:08:13.287 { 00:08:13.287 "name": "BaseBdev1", 00:08:13.287 "aliases": [ 00:08:13.287 "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7" 00:08:13.287 ], 00:08:13.287 "product_name": "Malloc disk", 00:08:13.287 "block_size": 512, 00:08:13.287 "num_blocks": 65536, 00:08:13.287 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:13.287 "assigned_rate_limits": { 00:08:13.287 "rw_ios_per_sec": 0, 00:08:13.287 "rw_mbytes_per_sec": 0, 00:08:13.287 "r_mbytes_per_sec": 0, 00:08:13.287 "w_mbytes_per_sec": 0 00:08:13.287 }, 00:08:13.287 "claimed": true, 00:08:13.287 "claim_type": "exclusive_write", 00:08:13.287 "zoned": false, 00:08:13.287 "supported_io_types": { 00:08:13.287 "read": true, 00:08:13.287 "write": true, 00:08:13.287 "unmap": 
true, 00:08:13.287 "flush": true, 00:08:13.287 "reset": true, 00:08:13.287 "nvme_admin": false, 00:08:13.287 "nvme_io": false, 00:08:13.287 "nvme_io_md": false, 00:08:13.287 "write_zeroes": true, 00:08:13.287 "zcopy": true, 00:08:13.287 "get_zone_info": false, 00:08:13.287 "zone_management": false, 00:08:13.287 "zone_append": false, 00:08:13.287 "compare": false, 00:08:13.287 "compare_and_write": false, 00:08:13.287 "abort": true, 00:08:13.287 "seek_hole": false, 00:08:13.287 "seek_data": false, 00:08:13.287 "copy": true, 00:08:13.287 "nvme_iov_md": false 00:08:13.288 }, 00:08:13.288 "memory_domains": [ 00:08:13.288 { 00:08:13.288 "dma_device_id": "system", 00:08:13.288 "dma_device_type": 1 00:08:13.288 }, 00:08:13.288 { 00:08:13.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.288 "dma_device_type": 2 00:08:13.288 } 00:08:13.288 ], 00:08:13.288 "driver_specific": {} 00:08:13.288 } 00:08:13.288 ] 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.288 10:19:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.288 "name": "Existed_Raid", 00:08:13.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.288 "strip_size_kb": 64, 00:08:13.288 "state": "configuring", 00:08:13.288 "raid_level": "raid0", 00:08:13.288 "superblock": false, 00:08:13.288 "num_base_bdevs": 3, 00:08:13.288 "num_base_bdevs_discovered": 2, 00:08:13.288 "num_base_bdevs_operational": 3, 00:08:13.288 "base_bdevs_list": [ 00:08:13.288 { 00:08:13.288 "name": "BaseBdev1", 00:08:13.288 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:13.288 "is_configured": true, 00:08:13.288 "data_offset": 0, 00:08:13.288 "data_size": 65536 00:08:13.288 }, 00:08:13.288 { 00:08:13.288 "name": null, 00:08:13.288 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:13.288 "is_configured": false, 00:08:13.288 "data_offset": 0, 00:08:13.288 "data_size": 65536 00:08:13.288 }, 00:08:13.288 { 00:08:13.288 "name": "BaseBdev3", 00:08:13.288 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:13.288 "is_configured": true, 00:08:13.288 "data_offset": 0, 
00:08:13.288 "data_size": 65536 00:08:13.288 } 00:08:13.288 ] 00:08:13.288 }' 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.288 10:19:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.858 [2024-11-19 10:19:27.425814] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.858 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.859 "name": "Existed_Raid", 00:08:13.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.859 "strip_size_kb": 64, 00:08:13.859 "state": "configuring", 00:08:13.859 "raid_level": "raid0", 00:08:13.859 "superblock": false, 00:08:13.859 "num_base_bdevs": 3, 00:08:13.859 "num_base_bdevs_discovered": 1, 00:08:13.859 "num_base_bdevs_operational": 3, 00:08:13.859 "base_bdevs_list": [ 00:08:13.859 { 00:08:13.859 "name": "BaseBdev1", 00:08:13.859 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:13.859 "is_configured": true, 00:08:13.859 "data_offset": 0, 00:08:13.859 "data_size": 65536 00:08:13.859 }, 00:08:13.859 { 
00:08:13.859 "name": null, 00:08:13.859 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:13.859 "is_configured": false, 00:08:13.859 "data_offset": 0, 00:08:13.859 "data_size": 65536 00:08:13.859 }, 00:08:13.859 { 00:08:13.859 "name": null, 00:08:13.859 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:13.859 "is_configured": false, 00:08:13.859 "data_offset": 0, 00:08:13.859 "data_size": 65536 00:08:13.859 } 00:08:13.859 ] 00:08:13.859 }' 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.859 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.119 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.119 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.119 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.119 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.119 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.119 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.379 [2024-11-19 10:19:27.905046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.379 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.380 "name": "Existed_Raid", 00:08:14.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.380 "strip_size_kb": 64, 00:08:14.380 "state": "configuring", 00:08:14.380 "raid_level": "raid0", 00:08:14.380 
"superblock": false, 00:08:14.380 "num_base_bdevs": 3, 00:08:14.380 "num_base_bdevs_discovered": 2, 00:08:14.380 "num_base_bdevs_operational": 3, 00:08:14.380 "base_bdevs_list": [ 00:08:14.380 { 00:08:14.380 "name": "BaseBdev1", 00:08:14.380 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:14.380 "is_configured": true, 00:08:14.380 "data_offset": 0, 00:08:14.380 "data_size": 65536 00:08:14.380 }, 00:08:14.380 { 00:08:14.380 "name": null, 00:08:14.380 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:14.380 "is_configured": false, 00:08:14.380 "data_offset": 0, 00:08:14.380 "data_size": 65536 00:08:14.380 }, 00:08:14.380 { 00:08:14.380 "name": "BaseBdev3", 00:08:14.380 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:14.380 "is_configured": true, 00:08:14.380 "data_offset": 0, 00:08:14.380 "data_size": 65536 00:08:14.380 } 00:08:14.380 ] 00:08:14.380 }' 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.380 10:19:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
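Each `verify_raid_bdev_state Existed_Raid configuring raid0 64 3` step above extracts the raid record with `jq -r '.[] | select(.name == "Existed_Raid")'` and compares the expected state, level, strip size, and bdev counts. The sketch below re-implements those checks in Python against a trimmed copy of the JSON actually dumped in this log; the helper name matches the shell function, but this is an illustrative stand-in, not SPDK code.

```python
import json

# Trimmed copy of one `bdev_raid_get_bdevs all` record from the log above.
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true, "data_size": 65536},
    {"name": null,        "is_configured": false, "data_size": 65536},
    {"name": "BaseBdev3", "is_configured": true, "data_size": 65536}
  ]
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size, operational):
    """Python equivalent of the jq comparisons in bdev_raid.sh's verify_raid_bdev_state."""
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    # A removed base bdev keeps its slot (name null, is_configured false), so the
    # configured-slot count must equal num_base_bdevs_discovered.
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert discovered == info["num_base_bdevs_discovered"]

verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3)
```

Note that the array stays `num_base_bdevs` entries long throughout; removal only clears a slot, which is why the test can re-add a bdev into the same position later.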
00:08:14.640 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.901 [2024-11-19 10:19:28.420215] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.901 "name": "Existed_Raid", 00:08:14.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.901 "strip_size_kb": 64, 00:08:14.901 "state": "configuring", 00:08:14.901 "raid_level": "raid0", 00:08:14.901 "superblock": false, 00:08:14.901 "num_base_bdevs": 3, 00:08:14.901 "num_base_bdevs_discovered": 1, 00:08:14.901 "num_base_bdevs_operational": 3, 00:08:14.901 "base_bdevs_list": [ 00:08:14.901 { 00:08:14.901 "name": null, 00:08:14.901 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:14.901 "is_configured": false, 00:08:14.901 "data_offset": 0, 00:08:14.901 "data_size": 65536 00:08:14.901 }, 00:08:14.901 { 00:08:14.901 "name": null, 00:08:14.901 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:14.901 "is_configured": false, 00:08:14.901 "data_offset": 0, 00:08:14.901 "data_size": 65536 00:08:14.901 }, 00:08:14.901 { 00:08:14.901 "name": "BaseBdev3", 00:08:14.901 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:14.901 "is_configured": true, 00:08:14.901 "data_offset": 0, 00:08:14.901 "data_size": 65536 00:08:14.901 } 00:08:14.901 ] 00:08:14.901 }' 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.901 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.161 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.161 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.161 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.161 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.161 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.421 [2024-11-19 10:19:28.965906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.421 10:19:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.421 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.421 "name": "Existed_Raid", 00:08:15.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.421 "strip_size_kb": 64, 00:08:15.421 "state": "configuring", 00:08:15.421 "raid_level": "raid0", 00:08:15.421 "superblock": false, 00:08:15.421 "num_base_bdevs": 3, 00:08:15.421 "num_base_bdevs_discovered": 2, 00:08:15.421 "num_base_bdevs_operational": 3, 00:08:15.421 "base_bdevs_list": [ 00:08:15.421 { 00:08:15.421 "name": null, 00:08:15.421 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:15.421 "is_configured": false, 00:08:15.421 "data_offset": 0, 00:08:15.421 "data_size": 65536 00:08:15.421 }, 00:08:15.421 { 00:08:15.421 "name": "BaseBdev2", 00:08:15.421 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:15.421 "is_configured": true, 00:08:15.421 "data_offset": 0, 00:08:15.421 "data_size": 65536 00:08:15.421 }, 00:08:15.421 { 00:08:15.421 "name": "BaseBdev3", 00:08:15.421 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:15.421 "is_configured": true, 00:08:15.421 "data_offset": 0, 00:08:15.421 "data_size": 65536 00:08:15.421 } 00:08:15.421 ] 00:08:15.421 }' 00:08:15.421 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.422 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.680 10:19:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:15.680 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.939 [2024-11-19 10:19:29.500398] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:15.939 [2024-11-19 10:19:29.500511] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:15.939 [2024-11-19 10:19:29.500543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:15.939 [2024-11-19 10:19:29.500850] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:15.939 [2024-11-19 10:19:29.501077] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:15.939 [2024-11-19 10:19:29.501127] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:15.939 [2024-11-19 10:19:29.501444] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.939 NewBaseBdev 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.939 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:15.939 [ 00:08:15.939 { 00:08:15.939 "name": "NewBaseBdev", 00:08:15.939 "aliases": [ 00:08:15.939 "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7" 00:08:15.939 ], 00:08:15.939 "product_name": "Malloc disk", 00:08:15.939 "block_size": 512, 00:08:15.939 "num_blocks": 65536, 00:08:15.939 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:15.939 "assigned_rate_limits": { 00:08:15.939 "rw_ios_per_sec": 0, 00:08:15.939 "rw_mbytes_per_sec": 0, 00:08:15.939 "r_mbytes_per_sec": 0, 00:08:15.939 "w_mbytes_per_sec": 0 00:08:15.939 }, 00:08:15.939 "claimed": true, 00:08:15.940 "claim_type": "exclusive_write", 00:08:15.940 "zoned": false, 00:08:15.940 "supported_io_types": { 00:08:15.940 "read": true, 00:08:15.940 "write": true, 00:08:15.940 "unmap": true, 00:08:15.940 "flush": true, 00:08:15.940 "reset": true, 00:08:15.940 "nvme_admin": false, 00:08:15.940 "nvme_io": false, 00:08:15.940 "nvme_io_md": false, 00:08:15.940 "write_zeroes": true, 00:08:15.940 "zcopy": true, 00:08:15.940 "get_zone_info": false, 00:08:15.940 "zone_management": false, 00:08:15.940 "zone_append": false, 00:08:15.940 "compare": false, 00:08:15.940 "compare_and_write": false, 00:08:15.940 "abort": true, 00:08:15.940 "seek_hole": false, 00:08:15.940 "seek_data": false, 00:08:15.940 "copy": true, 00:08:15.940 "nvme_iov_md": false 00:08:15.940 }, 00:08:15.940 "memory_domains": [ 00:08:15.940 { 00:08:15.940 "dma_device_id": "system", 00:08:15.940 "dma_device_type": 1 00:08:15.940 }, 00:08:15.940 { 00:08:15.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.940 "dma_device_type": 2 00:08:15.940 } 00:08:15.940 ], 00:08:15.940 "driver_specific": {} 00:08:15.940 } 00:08:15.940 ] 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.940 "name": "Existed_Raid", 00:08:15.940 "uuid": "174d2784-5831-42af-a041-41e4c0ca2d26", 00:08:15.940 "strip_size_kb": 64, 00:08:15.940 "state": "online", 00:08:15.940 "raid_level": "raid0", 00:08:15.940 "superblock": false, 00:08:15.940 "num_base_bdevs": 3, 00:08:15.940 
"num_base_bdevs_discovered": 3, 00:08:15.940 "num_base_bdevs_operational": 3, 00:08:15.940 "base_bdevs_list": [ 00:08:15.940 { 00:08:15.940 "name": "NewBaseBdev", 00:08:15.940 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:15.940 "is_configured": true, 00:08:15.940 "data_offset": 0, 00:08:15.940 "data_size": 65536 00:08:15.940 }, 00:08:15.940 { 00:08:15.940 "name": "BaseBdev2", 00:08:15.940 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:15.940 "is_configured": true, 00:08:15.940 "data_offset": 0, 00:08:15.940 "data_size": 65536 00:08:15.940 }, 00:08:15.940 { 00:08:15.940 "name": "BaseBdev3", 00:08:15.940 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:15.940 "is_configured": true, 00:08:15.940 "data_offset": 0, 00:08:15.940 "data_size": 65536 00:08:15.940 } 00:08:15.940 ] 00:08:15.940 }' 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.940 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.510 10:19:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.510 [2024-11-19 10:19:29.999913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:16.510 "name": "Existed_Raid", 00:08:16.510 "aliases": [ 00:08:16.510 "174d2784-5831-42af-a041-41e4c0ca2d26" 00:08:16.510 ], 00:08:16.510 "product_name": "Raid Volume", 00:08:16.510 "block_size": 512, 00:08:16.510 "num_blocks": 196608, 00:08:16.510 "uuid": "174d2784-5831-42af-a041-41e4c0ca2d26", 00:08:16.510 "assigned_rate_limits": { 00:08:16.510 "rw_ios_per_sec": 0, 00:08:16.510 "rw_mbytes_per_sec": 0, 00:08:16.510 "r_mbytes_per_sec": 0, 00:08:16.510 "w_mbytes_per_sec": 0 00:08:16.510 }, 00:08:16.510 "claimed": false, 00:08:16.510 "zoned": false, 00:08:16.510 "supported_io_types": { 00:08:16.510 "read": true, 00:08:16.510 "write": true, 00:08:16.510 "unmap": true, 00:08:16.510 "flush": true, 00:08:16.510 "reset": true, 00:08:16.510 "nvme_admin": false, 00:08:16.510 "nvme_io": false, 00:08:16.510 "nvme_io_md": false, 00:08:16.510 "write_zeroes": true, 00:08:16.510 "zcopy": false, 00:08:16.510 "get_zone_info": false, 00:08:16.510 "zone_management": false, 00:08:16.510 "zone_append": false, 00:08:16.510 "compare": false, 00:08:16.510 "compare_and_write": false, 00:08:16.510 "abort": false, 00:08:16.510 "seek_hole": false, 00:08:16.510 "seek_data": false, 00:08:16.510 "copy": false, 00:08:16.510 "nvme_iov_md": false 00:08:16.510 }, 00:08:16.510 "memory_domains": [ 00:08:16.510 { 00:08:16.510 "dma_device_id": "system", 00:08:16.510 "dma_device_type": 1 00:08:16.510 }, 00:08:16.510 { 00:08:16.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.510 "dma_device_type": 2 00:08:16.510 }, 
00:08:16.510 { 00:08:16.510 "dma_device_id": "system", 00:08:16.510 "dma_device_type": 1 00:08:16.510 }, 00:08:16.510 { 00:08:16.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.510 "dma_device_type": 2 00:08:16.510 }, 00:08:16.510 { 00:08:16.510 "dma_device_id": "system", 00:08:16.510 "dma_device_type": 1 00:08:16.510 }, 00:08:16.510 { 00:08:16.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.510 "dma_device_type": 2 00:08:16.510 } 00:08:16.510 ], 00:08:16.510 "driver_specific": { 00:08:16.510 "raid": { 00:08:16.510 "uuid": "174d2784-5831-42af-a041-41e4c0ca2d26", 00:08:16.510 "strip_size_kb": 64, 00:08:16.510 "state": "online", 00:08:16.510 "raid_level": "raid0", 00:08:16.510 "superblock": false, 00:08:16.510 "num_base_bdevs": 3, 00:08:16.510 "num_base_bdevs_discovered": 3, 00:08:16.510 "num_base_bdevs_operational": 3, 00:08:16.510 "base_bdevs_list": [ 00:08:16.510 { 00:08:16.510 "name": "NewBaseBdev", 00:08:16.510 "uuid": "5c6c69c7-2345-41b2-a6e6-b9ba4ad289a7", 00:08:16.510 "is_configured": true, 00:08:16.510 "data_offset": 0, 00:08:16.510 "data_size": 65536 00:08:16.510 }, 00:08:16.510 { 00:08:16.510 "name": "BaseBdev2", 00:08:16.510 "uuid": "2030574a-3068-4782-863e-98d4b6f4effa", 00:08:16.510 "is_configured": true, 00:08:16.510 "data_offset": 0, 00:08:16.510 "data_size": 65536 00:08:16.510 }, 00:08:16.510 { 00:08:16.510 "name": "BaseBdev3", 00:08:16.510 "uuid": "ac3180a0-6345-48a0-a0e4-531949a04dfa", 00:08:16.510 "is_configured": true, 00:08:16.510 "data_offset": 0, 00:08:16.510 "data_size": 65536 00:08:16.510 } 00:08:16.510 ] 00:08:16.510 } 00:08:16.510 } 00:08:16.510 }' 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:16.510 BaseBdev2 00:08:16.510 BaseBdev3' 00:08:16.510 10:19:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.510 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.511 [2024-11-19 10:19:30.279200] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.511 [2024-11-19 10:19:30.279232] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:16.511 [2024-11-19 10:19:30.279319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.511 [2024-11-19 10:19:30.279379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.511 [2024-11-19 10:19:30.279394] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63653 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63653 ']' 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63653 00:08:16.511 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63653 00:08:16.770 killing process with pid 63653 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63653' 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63653 00:08:16.770 [2024-11-19 10:19:30.327248] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.770 10:19:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63653 00:08:17.086 [2024-11-19 10:19:30.622487] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:18.051 10:19:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:18.051 00:08:18.051 real 0m10.590s 00:08:18.051 user 0m16.904s 00:08:18.051 sys 0m1.825s 00:08:18.051 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:18.051 ************************************ 00:08:18.051 END TEST raid_state_function_test 00:08:18.051 ************************************ 00:08:18.051 10:19:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.051 10:19:31 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:18.051 10:19:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:18.052 10:19:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.052 10:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:18.052 ************************************ 00:08:18.052 START TEST raid_state_function_test_sb 00:08:18.052 ************************************ 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64274 00:08:18.052 10:19:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64274' 00:08:18.052 Process raid pid: 64274 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64274 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64274 ']' 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.052 10:19:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.311 [2024-11-19 10:19:31.867302] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:18.311 [2024-11-19 10:19:31.867498] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.311 [2024-11-19 10:19:32.039637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.570 [2024-11-19 10:19:32.155529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.829 [2024-11-19 10:19:32.360227] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.829 [2024-11-19 10:19:32.360267] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.089 [2024-11-19 10:19:32.693219] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.089 [2024-11-19 10:19:32.693286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.089 [2024-11-19 10:19:32.693311] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.089 [2024-11-19 10:19:32.693340] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.089 [2024-11-19 10:19:32.693348] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:19.089 [2024-11-19 10:19:32.693360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.089 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.090 10:19:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.090 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.090 "name": "Existed_Raid", 00:08:19.090 "uuid": "89c0b99c-9903-4f24-b6a2-c7ec02e4e068", 00:08:19.090 "strip_size_kb": 64, 00:08:19.090 "state": "configuring", 00:08:19.090 "raid_level": "raid0", 00:08:19.090 "superblock": true, 00:08:19.090 "num_base_bdevs": 3, 00:08:19.090 "num_base_bdevs_discovered": 0, 00:08:19.090 "num_base_bdevs_operational": 3, 00:08:19.090 "base_bdevs_list": [ 00:08:19.090 { 00:08:19.090 "name": "BaseBdev1", 00:08:19.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.090 "is_configured": false, 00:08:19.090 "data_offset": 0, 00:08:19.090 "data_size": 0 00:08:19.090 }, 00:08:19.090 { 00:08:19.090 "name": "BaseBdev2", 00:08:19.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.090 "is_configured": false, 00:08:19.090 "data_offset": 0, 00:08:19.090 "data_size": 0 00:08:19.090 }, 00:08:19.090 { 00:08:19.090 "name": "BaseBdev3", 00:08:19.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.090 "is_configured": false, 00:08:19.090 "data_offset": 0, 00:08:19.090 "data_size": 0 00:08:19.090 } 00:08:19.090 ] 00:08:19.090 }' 00:08:19.090 10:19:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.090 10:19:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.660 [2024-11-19 10:19:33.152410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.660 [2024-11-19 10:19:33.152514] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.660 [2024-11-19 10:19:33.160392] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:19.660 [2024-11-19 10:19:33.160495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:19.660 [2024-11-19 10:19:33.160530] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.660 [2024-11-19 10:19:33.160560] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.660 [2024-11-19 10:19:33.160583] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.660 [2024-11-19 10:19:33.160612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.660 [2024-11-19 10:19:33.203519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.660 BaseBdev1 
00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:19.660 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.661 [ 00:08:19.661 { 00:08:19.661 "name": "BaseBdev1", 00:08:19.661 "aliases": [ 00:08:19.661 "61666389-78fe-4b13-89c2-1a291bb59576" 00:08:19.661 ], 00:08:19.661 "product_name": "Malloc disk", 00:08:19.661 "block_size": 512, 00:08:19.661 "num_blocks": 65536, 00:08:19.661 "uuid": "61666389-78fe-4b13-89c2-1a291bb59576", 00:08:19.661 "assigned_rate_limits": { 00:08:19.661 
"rw_ios_per_sec": 0, 00:08:19.661 "rw_mbytes_per_sec": 0, 00:08:19.661 "r_mbytes_per_sec": 0, 00:08:19.661 "w_mbytes_per_sec": 0 00:08:19.661 }, 00:08:19.661 "claimed": true, 00:08:19.661 "claim_type": "exclusive_write", 00:08:19.661 "zoned": false, 00:08:19.661 "supported_io_types": { 00:08:19.661 "read": true, 00:08:19.661 "write": true, 00:08:19.661 "unmap": true, 00:08:19.661 "flush": true, 00:08:19.661 "reset": true, 00:08:19.661 "nvme_admin": false, 00:08:19.661 "nvme_io": false, 00:08:19.661 "nvme_io_md": false, 00:08:19.661 "write_zeroes": true, 00:08:19.661 "zcopy": true, 00:08:19.661 "get_zone_info": false, 00:08:19.661 "zone_management": false, 00:08:19.661 "zone_append": false, 00:08:19.661 "compare": false, 00:08:19.661 "compare_and_write": false, 00:08:19.661 "abort": true, 00:08:19.661 "seek_hole": false, 00:08:19.661 "seek_data": false, 00:08:19.661 "copy": true, 00:08:19.661 "nvme_iov_md": false 00:08:19.661 }, 00:08:19.661 "memory_domains": [ 00:08:19.661 { 00:08:19.661 "dma_device_id": "system", 00:08:19.661 "dma_device_type": 1 00:08:19.661 }, 00:08:19.661 { 00:08:19.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.661 "dma_device_type": 2 00:08:19.661 } 00:08:19.661 ], 00:08:19.661 "driver_specific": {} 00:08:19.661 } 00:08:19.661 ] 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.661 "name": "Existed_Raid", 00:08:19.661 "uuid": "cf9602bc-85e5-412c-87c0-b80cf0ab1207", 00:08:19.661 "strip_size_kb": 64, 00:08:19.661 "state": "configuring", 00:08:19.661 "raid_level": "raid0", 00:08:19.661 "superblock": true, 00:08:19.661 "num_base_bdevs": 3, 00:08:19.661 "num_base_bdevs_discovered": 1, 00:08:19.661 "num_base_bdevs_operational": 3, 00:08:19.661 "base_bdevs_list": [ 00:08:19.661 { 00:08:19.661 "name": "BaseBdev1", 00:08:19.661 "uuid": "61666389-78fe-4b13-89c2-1a291bb59576", 00:08:19.661 "is_configured": true, 00:08:19.661 "data_offset": 2048, 00:08:19.661 "data_size": 63488 
00:08:19.661 }, 00:08:19.661 { 00:08:19.661 "name": "BaseBdev2", 00:08:19.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.661 "is_configured": false, 00:08:19.661 "data_offset": 0, 00:08:19.661 "data_size": 0 00:08:19.661 }, 00:08:19.661 { 00:08:19.661 "name": "BaseBdev3", 00:08:19.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:19.661 "is_configured": false, 00:08:19.661 "data_offset": 0, 00:08:19.661 "data_size": 0 00:08:19.661 } 00:08:19.661 ] 00:08:19.661 }' 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.661 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.922 [2024-11-19 10:19:33.662812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:19.922 [2024-11-19 10:19:33.662878] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.922 [2024-11-19 10:19:33.670852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.922 [2024-11-19 
10:19:33.672810] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:19.922 [2024-11-19 10:19:33.672864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:19.922 [2024-11-19 10:19:33.672877] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:19.922 [2024-11-19 10:19:33.672888] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:19.922 10:19:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.182 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.182 "name": "Existed_Raid", 00:08:20.182 "uuid": "e60810ff-d830-434d-995c-df31bd1b37fd", 00:08:20.182 "strip_size_kb": 64, 00:08:20.182 "state": "configuring", 00:08:20.182 "raid_level": "raid0", 00:08:20.182 "superblock": true, 00:08:20.182 "num_base_bdevs": 3, 00:08:20.182 "num_base_bdevs_discovered": 1, 00:08:20.182 "num_base_bdevs_operational": 3, 00:08:20.182 "base_bdevs_list": [ 00:08:20.182 { 00:08:20.182 "name": "BaseBdev1", 00:08:20.182 "uuid": "61666389-78fe-4b13-89c2-1a291bb59576", 00:08:20.182 "is_configured": true, 00:08:20.182 "data_offset": 2048, 00:08:20.182 "data_size": 63488 00:08:20.182 }, 00:08:20.182 { 00:08:20.182 "name": "BaseBdev2", 00:08:20.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.182 "is_configured": false, 00:08:20.182 "data_offset": 0, 00:08:20.182 "data_size": 0 00:08:20.182 }, 00:08:20.182 { 00:08:20.182 "name": "BaseBdev3", 00:08:20.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.182 "is_configured": false, 00:08:20.182 "data_offset": 0, 00:08:20.182 "data_size": 0 00:08:20.182 } 00:08:20.182 ] 00:08:20.182 }' 00:08:20.182 10:19:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.182 10:19:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.442 [2024-11-19 10:19:34.182298] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:20.442 BaseBdev2 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.442 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.443 [ 00:08:20.443 { 00:08:20.443 "name": "BaseBdev2", 00:08:20.443 "aliases": [ 00:08:20.443 "9ce8923d-857b-4f5c-9144-497fadf68085" 00:08:20.443 ], 00:08:20.443 "product_name": "Malloc disk", 00:08:20.443 "block_size": 512, 00:08:20.443 "num_blocks": 65536, 00:08:20.443 "uuid": "9ce8923d-857b-4f5c-9144-497fadf68085", 00:08:20.443 "assigned_rate_limits": { 00:08:20.443 "rw_ios_per_sec": 0, 00:08:20.443 "rw_mbytes_per_sec": 0, 00:08:20.443 "r_mbytes_per_sec": 0, 00:08:20.443 "w_mbytes_per_sec": 0 00:08:20.443 }, 00:08:20.443 "claimed": true, 00:08:20.443 "claim_type": "exclusive_write", 00:08:20.443 "zoned": false, 00:08:20.443 "supported_io_types": { 00:08:20.443 "read": true, 00:08:20.443 "write": true, 00:08:20.443 "unmap": true, 00:08:20.443 "flush": true, 00:08:20.443 "reset": true, 00:08:20.443 "nvme_admin": false, 00:08:20.443 "nvme_io": false, 00:08:20.443 "nvme_io_md": false, 00:08:20.443 "write_zeroes": true, 00:08:20.443 "zcopy": true, 00:08:20.443 "get_zone_info": false, 00:08:20.443 "zone_management": false, 00:08:20.443 "zone_append": false, 00:08:20.443 "compare": false, 00:08:20.443 "compare_and_write": false, 00:08:20.443 "abort": true, 00:08:20.443 "seek_hole": false, 00:08:20.443 "seek_data": false, 00:08:20.443 "copy": true, 00:08:20.443 "nvme_iov_md": false 00:08:20.443 }, 00:08:20.443 "memory_domains": [ 00:08:20.443 { 00:08:20.443 "dma_device_id": "system", 00:08:20.443 "dma_device_type": 1 00:08:20.443 }, 00:08:20.443 { 00:08:20.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.443 "dma_device_type": 2 00:08:20.443 } 00:08:20.443 ], 00:08:20.443 "driver_specific": {} 00:08:20.443 } 00:08:20.443 ] 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.443 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:20.702 "name": "Existed_Raid", 00:08:20.702 "uuid": "e60810ff-d830-434d-995c-df31bd1b37fd", 00:08:20.702 "strip_size_kb": 64, 00:08:20.702 "state": "configuring", 00:08:20.702 "raid_level": "raid0", 00:08:20.702 "superblock": true, 00:08:20.702 "num_base_bdevs": 3, 00:08:20.702 "num_base_bdevs_discovered": 2, 00:08:20.702 "num_base_bdevs_operational": 3, 00:08:20.702 "base_bdevs_list": [ 00:08:20.702 { 00:08:20.702 "name": "BaseBdev1", 00:08:20.702 "uuid": "61666389-78fe-4b13-89c2-1a291bb59576", 00:08:20.702 "is_configured": true, 00:08:20.702 "data_offset": 2048, 00:08:20.702 "data_size": 63488 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "name": "BaseBdev2", 00:08:20.702 "uuid": "9ce8923d-857b-4f5c-9144-497fadf68085", 00:08:20.702 "is_configured": true, 00:08:20.702 "data_offset": 2048, 00:08:20.702 "data_size": 63488 00:08:20.702 }, 00:08:20.702 { 00:08:20.702 "name": "BaseBdev3", 00:08:20.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.702 "is_configured": false, 00:08:20.702 "data_offset": 0, 00:08:20.702 "data_size": 0 00:08:20.702 } 00:08:20.702 ] 00:08:20.702 }' 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:20.702 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.961 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.962 [2024-11-19 10:19:34.712177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:20.962 [2024-11-19 10:19:34.712565] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:20.962 [2024-11-19 10:19:34.712635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.962 [2024-11-19 10:19:34.712947] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:20.962 [2024-11-19 10:19:34.713154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:20.962 [2024-11-19 10:19:34.713203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:20.962 BaseBdev3 00:08:20.962 [2024-11-19 10:19:34.713395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.962 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.222 [ 00:08:21.222 { 00:08:21.222 "name": "BaseBdev3", 00:08:21.222 "aliases": [ 00:08:21.222 "3f66433a-a2b7-4998-bb87-49e03ac9c29e" 00:08:21.222 ], 00:08:21.222 "product_name": "Malloc disk", 00:08:21.222 "block_size": 512, 00:08:21.222 "num_blocks": 65536, 00:08:21.222 "uuid": "3f66433a-a2b7-4998-bb87-49e03ac9c29e", 00:08:21.222 "assigned_rate_limits": { 00:08:21.222 "rw_ios_per_sec": 0, 00:08:21.222 "rw_mbytes_per_sec": 0, 00:08:21.222 "r_mbytes_per_sec": 0, 00:08:21.222 "w_mbytes_per_sec": 0 00:08:21.222 }, 00:08:21.222 "claimed": true, 00:08:21.222 "claim_type": "exclusive_write", 00:08:21.222 "zoned": false, 00:08:21.222 "supported_io_types": { 00:08:21.222 "read": true, 00:08:21.222 "write": true, 00:08:21.222 "unmap": true, 00:08:21.222 "flush": true, 00:08:21.222 "reset": true, 00:08:21.222 "nvme_admin": false, 00:08:21.222 "nvme_io": false, 00:08:21.222 "nvme_io_md": false, 00:08:21.222 "write_zeroes": true, 00:08:21.222 "zcopy": true, 00:08:21.222 "get_zone_info": false, 00:08:21.222 "zone_management": false, 00:08:21.222 "zone_append": false, 00:08:21.222 "compare": false, 00:08:21.222 "compare_and_write": false, 00:08:21.222 "abort": true, 00:08:21.222 "seek_hole": false, 00:08:21.222 "seek_data": false, 00:08:21.222 "copy": true, 00:08:21.222 "nvme_iov_md": false 00:08:21.222 }, 00:08:21.222 "memory_domains": [ 00:08:21.222 { 00:08:21.222 "dma_device_id": "system", 00:08:21.222 "dma_device_type": 1 00:08:21.222 }, 00:08:21.222 { 00:08:21.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.222 "dma_device_type": 2 00:08:21.222 } 00:08:21.222 ], 00:08:21.222 "driver_specific": 
{} 00:08:21.222 } 00:08:21.222 ] 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.222 "name": "Existed_Raid", 00:08:21.222 "uuid": "e60810ff-d830-434d-995c-df31bd1b37fd", 00:08:21.222 "strip_size_kb": 64, 00:08:21.222 "state": "online", 00:08:21.222 "raid_level": "raid0", 00:08:21.222 "superblock": true, 00:08:21.222 "num_base_bdevs": 3, 00:08:21.222 "num_base_bdevs_discovered": 3, 00:08:21.222 "num_base_bdevs_operational": 3, 00:08:21.222 "base_bdevs_list": [ 00:08:21.222 { 00:08:21.222 "name": "BaseBdev1", 00:08:21.222 "uuid": "61666389-78fe-4b13-89c2-1a291bb59576", 00:08:21.222 "is_configured": true, 00:08:21.222 "data_offset": 2048, 00:08:21.222 "data_size": 63488 00:08:21.222 }, 00:08:21.222 { 00:08:21.222 "name": "BaseBdev2", 00:08:21.222 "uuid": "9ce8923d-857b-4f5c-9144-497fadf68085", 00:08:21.222 "is_configured": true, 00:08:21.222 "data_offset": 2048, 00:08:21.222 "data_size": 63488 00:08:21.222 }, 00:08:21.222 { 00:08:21.222 "name": "BaseBdev3", 00:08:21.222 "uuid": "3f66433a-a2b7-4998-bb87-49e03ac9c29e", 00:08:21.222 "is_configured": true, 00:08:21.222 "data_offset": 2048, 00:08:21.222 "data_size": 63488 00:08:21.222 } 00:08:21.222 ] 00:08:21.222 }' 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.222 10:19:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.483 [2024-11-19 10:19:35.207654] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.483 "name": "Existed_Raid", 00:08:21.483 "aliases": [ 00:08:21.483 "e60810ff-d830-434d-995c-df31bd1b37fd" 00:08:21.483 ], 00:08:21.483 "product_name": "Raid Volume", 00:08:21.483 "block_size": 512, 00:08:21.483 "num_blocks": 190464, 00:08:21.483 "uuid": "e60810ff-d830-434d-995c-df31bd1b37fd", 00:08:21.483 "assigned_rate_limits": { 00:08:21.483 "rw_ios_per_sec": 0, 00:08:21.483 "rw_mbytes_per_sec": 0, 00:08:21.483 "r_mbytes_per_sec": 0, 00:08:21.483 "w_mbytes_per_sec": 0 00:08:21.483 }, 00:08:21.483 "claimed": false, 00:08:21.483 "zoned": false, 00:08:21.483 "supported_io_types": { 00:08:21.483 "read": true, 00:08:21.483 "write": true, 00:08:21.483 "unmap": true, 00:08:21.483 "flush": true, 00:08:21.483 "reset": true, 00:08:21.483 "nvme_admin": false, 00:08:21.483 "nvme_io": false, 00:08:21.483 "nvme_io_md": false, 00:08:21.483 
"write_zeroes": true, 00:08:21.483 "zcopy": false, 00:08:21.483 "get_zone_info": false, 00:08:21.483 "zone_management": false, 00:08:21.483 "zone_append": false, 00:08:21.483 "compare": false, 00:08:21.483 "compare_and_write": false, 00:08:21.483 "abort": false, 00:08:21.483 "seek_hole": false, 00:08:21.483 "seek_data": false, 00:08:21.483 "copy": false, 00:08:21.483 "nvme_iov_md": false 00:08:21.483 }, 00:08:21.483 "memory_domains": [ 00:08:21.483 { 00:08:21.483 "dma_device_id": "system", 00:08:21.483 "dma_device_type": 1 00:08:21.483 }, 00:08:21.483 { 00:08:21.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.483 "dma_device_type": 2 00:08:21.483 }, 00:08:21.483 { 00:08:21.483 "dma_device_id": "system", 00:08:21.483 "dma_device_type": 1 00:08:21.483 }, 00:08:21.483 { 00:08:21.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.483 "dma_device_type": 2 00:08:21.483 }, 00:08:21.483 { 00:08:21.483 "dma_device_id": "system", 00:08:21.483 "dma_device_type": 1 00:08:21.483 }, 00:08:21.483 { 00:08:21.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.483 "dma_device_type": 2 00:08:21.483 } 00:08:21.483 ], 00:08:21.483 "driver_specific": { 00:08:21.483 "raid": { 00:08:21.483 "uuid": "e60810ff-d830-434d-995c-df31bd1b37fd", 00:08:21.483 "strip_size_kb": 64, 00:08:21.483 "state": "online", 00:08:21.483 "raid_level": "raid0", 00:08:21.483 "superblock": true, 00:08:21.483 "num_base_bdevs": 3, 00:08:21.483 "num_base_bdevs_discovered": 3, 00:08:21.483 "num_base_bdevs_operational": 3, 00:08:21.483 "base_bdevs_list": [ 00:08:21.483 { 00:08:21.483 "name": "BaseBdev1", 00:08:21.483 "uuid": "61666389-78fe-4b13-89c2-1a291bb59576", 00:08:21.483 "is_configured": true, 00:08:21.483 "data_offset": 2048, 00:08:21.483 "data_size": 63488 00:08:21.483 }, 00:08:21.483 { 00:08:21.483 "name": "BaseBdev2", 00:08:21.483 "uuid": "9ce8923d-857b-4f5c-9144-497fadf68085", 00:08:21.483 "is_configured": true, 00:08:21.483 "data_offset": 2048, 00:08:21.483 "data_size": 63488 00:08:21.483 }, 
00:08:21.483 { 00:08:21.483 "name": "BaseBdev3", 00:08:21.483 "uuid": "3f66433a-a2b7-4998-bb87-49e03ac9c29e", 00:08:21.483 "is_configured": true, 00:08:21.483 "data_offset": 2048, 00:08:21.483 "data_size": 63488 00:08:21.483 } 00:08:21.483 ] 00:08:21.483 } 00:08:21.483 } 00:08:21.483 }' 00:08:21.483 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:21.867 BaseBdev2 00:08:21.867 BaseBdev3' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.867 
10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:21.867 10:19:35 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.868 [2024-11-19 10:19:35.467031] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:21.868 [2024-11-19 10:19:35.467112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.868 [2024-11-19 10:19:35.467191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.868 "name": "Existed_Raid", 00:08:21.868 "uuid": "e60810ff-d830-434d-995c-df31bd1b37fd", 00:08:21.868 "strip_size_kb": 64, 00:08:21.868 "state": "offline", 00:08:21.868 "raid_level": "raid0", 00:08:21.868 "superblock": true, 00:08:21.868 "num_base_bdevs": 3, 00:08:21.868 "num_base_bdevs_discovered": 2, 00:08:21.868 "num_base_bdevs_operational": 2, 00:08:21.868 "base_bdevs_list": [ 00:08:21.868 { 00:08:21.868 "name": null, 00:08:21.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.868 "is_configured": false, 00:08:21.868 "data_offset": 0, 00:08:21.868 "data_size": 63488 00:08:21.868 }, 00:08:21.868 { 00:08:21.868 "name": "BaseBdev2", 00:08:21.868 "uuid": "9ce8923d-857b-4f5c-9144-497fadf68085", 00:08:21.868 "is_configured": true, 00:08:21.868 "data_offset": 2048, 00:08:21.868 "data_size": 63488 00:08:21.868 }, 00:08:21.868 { 00:08:21.868 "name": "BaseBdev3", 00:08:21.868 "uuid": "3f66433a-a2b7-4998-bb87-49e03ac9c29e", 
00:08:21.868 "is_configured": true, 00:08:21.868 "data_offset": 2048, 00:08:21.868 "data_size": 63488 00:08:21.868 } 00:08:21.868 ] 00:08:21.868 }' 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.868 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.453 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.454 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.454 10:19:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:22.454 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.454 10:19:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.454 [2024-11-19 10:19:36.002016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.454 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.454 [2024-11-19 10:19:36.153956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:22.454 [2024-11-19 10:19:36.154076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:22.714 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 BaseBdev2 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:22.715 10:19:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 [ 00:08:22.715 { 00:08:22.715 "name": "BaseBdev2", 00:08:22.715 "aliases": [ 00:08:22.715 "237060dd-3f4b-4861-a06e-2d3d4afaa721" 00:08:22.715 ], 00:08:22.715 "product_name": "Malloc disk", 00:08:22.715 "block_size": 512, 00:08:22.715 "num_blocks": 65536, 00:08:22.715 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:22.715 "assigned_rate_limits": { 00:08:22.715 "rw_ios_per_sec": 0, 00:08:22.715 "rw_mbytes_per_sec": 0, 00:08:22.715 "r_mbytes_per_sec": 0, 00:08:22.715 "w_mbytes_per_sec": 0 00:08:22.715 }, 00:08:22.715 "claimed": false, 00:08:22.715 "zoned": false, 00:08:22.715 "supported_io_types": { 00:08:22.715 "read": true, 00:08:22.715 "write": true, 00:08:22.715 "unmap": true, 00:08:22.715 "flush": true, 00:08:22.715 "reset": true, 00:08:22.715 "nvme_admin": false, 00:08:22.715 "nvme_io": false, 00:08:22.715 "nvme_io_md": false, 00:08:22.715 "write_zeroes": true, 00:08:22.715 "zcopy": true, 00:08:22.715 "get_zone_info": false, 00:08:22.715 
"zone_management": false, 00:08:22.715 "zone_append": false, 00:08:22.715 "compare": false, 00:08:22.715 "compare_and_write": false, 00:08:22.715 "abort": true, 00:08:22.715 "seek_hole": false, 00:08:22.715 "seek_data": false, 00:08:22.715 "copy": true, 00:08:22.715 "nvme_iov_md": false 00:08:22.715 }, 00:08:22.715 "memory_domains": [ 00:08:22.715 { 00:08:22.715 "dma_device_id": "system", 00:08:22.715 "dma_device_type": 1 00:08:22.715 }, 00:08:22.715 { 00:08:22.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.715 "dma_device_type": 2 00:08:22.715 } 00:08:22.715 ], 00:08:22.715 "driver_specific": {} 00:08:22.715 } 00:08:22.715 ] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 BaseBdev3 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 [ 00:08:22.715 { 00:08:22.715 "name": "BaseBdev3", 00:08:22.715 "aliases": [ 00:08:22.715 "c79ccf62-99d3-4080-b94e-6ee3542754ea" 00:08:22.715 ], 00:08:22.715 "product_name": "Malloc disk", 00:08:22.715 "block_size": 512, 00:08:22.715 "num_blocks": 65536, 00:08:22.715 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:22.715 "assigned_rate_limits": { 00:08:22.715 "rw_ios_per_sec": 0, 00:08:22.715 "rw_mbytes_per_sec": 0, 00:08:22.715 "r_mbytes_per_sec": 0, 00:08:22.715 "w_mbytes_per_sec": 0 00:08:22.715 }, 00:08:22.715 "claimed": false, 00:08:22.715 "zoned": false, 00:08:22.715 "supported_io_types": { 00:08:22.715 "read": true, 00:08:22.715 "write": true, 00:08:22.715 "unmap": true, 00:08:22.715 "flush": true, 00:08:22.715 "reset": true, 00:08:22.715 "nvme_admin": false, 00:08:22.715 "nvme_io": false, 00:08:22.715 "nvme_io_md": false, 00:08:22.715 "write_zeroes": true, 00:08:22.715 
"zcopy": true, 00:08:22.715 "get_zone_info": false, 00:08:22.715 "zone_management": false, 00:08:22.715 "zone_append": false, 00:08:22.715 "compare": false, 00:08:22.715 "compare_and_write": false, 00:08:22.715 "abort": true, 00:08:22.715 "seek_hole": false, 00:08:22.715 "seek_data": false, 00:08:22.715 "copy": true, 00:08:22.715 "nvme_iov_md": false 00:08:22.715 }, 00:08:22.715 "memory_domains": [ 00:08:22.715 { 00:08:22.715 "dma_device_id": "system", 00:08:22.715 "dma_device_type": 1 00:08:22.715 }, 00:08:22.715 { 00:08:22.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.715 "dma_device_type": 2 00:08:22.715 } 00:08:22.715 ], 00:08:22.715 "driver_specific": {} 00:08:22.715 } 00:08:22.715 ] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 [2024-11-19 10:19:36.462784] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.715 [2024-11-19 10:19:36.462891] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.715 [2024-11-19 10:19:36.462942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.715 [2024-11-19 10:19:36.464685] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.715 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.976 10:19:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.976 "name": "Existed_Raid", 00:08:22.976 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:22.976 "strip_size_kb": 64, 00:08:22.976 "state": "configuring", 00:08:22.976 "raid_level": "raid0", 00:08:22.976 "superblock": true, 00:08:22.976 "num_base_bdevs": 3, 00:08:22.976 "num_base_bdevs_discovered": 2, 00:08:22.976 "num_base_bdevs_operational": 3, 00:08:22.976 "base_bdevs_list": [ 00:08:22.976 { 00:08:22.976 "name": "BaseBdev1", 00:08:22.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.976 "is_configured": false, 00:08:22.976 "data_offset": 0, 00:08:22.976 "data_size": 0 00:08:22.976 }, 00:08:22.976 { 00:08:22.976 "name": "BaseBdev2", 00:08:22.976 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:22.976 "is_configured": true, 00:08:22.976 "data_offset": 2048, 00:08:22.976 "data_size": 63488 00:08:22.976 }, 00:08:22.976 { 00:08:22.976 "name": "BaseBdev3", 00:08:22.976 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:22.976 "is_configured": true, 00:08:22.976 "data_offset": 2048, 00:08:22.976 "data_size": 63488 00:08:22.976 } 00:08:22.976 ] 00:08:22.976 }' 00:08:22.976 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.976 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 [2024-11-19 10:19:36.882087] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.237 10:19:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.237 "name": "Existed_Raid", 00:08:23.237 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:23.237 "strip_size_kb": 64, 
00:08:23.237 "state": "configuring", 00:08:23.237 "raid_level": "raid0", 00:08:23.237 "superblock": true, 00:08:23.237 "num_base_bdevs": 3, 00:08:23.237 "num_base_bdevs_discovered": 1, 00:08:23.237 "num_base_bdevs_operational": 3, 00:08:23.237 "base_bdevs_list": [ 00:08:23.237 { 00:08:23.237 "name": "BaseBdev1", 00:08:23.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.237 "is_configured": false, 00:08:23.237 "data_offset": 0, 00:08:23.237 "data_size": 0 00:08:23.237 }, 00:08:23.237 { 00:08:23.237 "name": null, 00:08:23.237 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:23.237 "is_configured": false, 00:08:23.237 "data_offset": 0, 00:08:23.237 "data_size": 63488 00:08:23.237 }, 00:08:23.237 { 00:08:23.237 "name": "BaseBdev3", 00:08:23.237 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:23.237 "is_configured": true, 00:08:23.237 "data_offset": 2048, 00:08:23.237 "data_size": 63488 00:08:23.237 } 00:08:23.237 ] 00:08:23.237 }' 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.237 10:19:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.496 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:23.496 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.496 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.496 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.756 [2024-11-19 10:19:37.344808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.756 BaseBdev1 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.756 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.756 
[ 00:08:23.756 { 00:08:23.756 "name": "BaseBdev1", 00:08:23.756 "aliases": [ 00:08:23.757 "81b1e2b6-2bf0-45ed-bf15-2567c50c0809" 00:08:23.757 ], 00:08:23.757 "product_name": "Malloc disk", 00:08:23.757 "block_size": 512, 00:08:23.757 "num_blocks": 65536, 00:08:23.757 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:23.757 "assigned_rate_limits": { 00:08:23.757 "rw_ios_per_sec": 0, 00:08:23.757 "rw_mbytes_per_sec": 0, 00:08:23.757 "r_mbytes_per_sec": 0, 00:08:23.757 "w_mbytes_per_sec": 0 00:08:23.757 }, 00:08:23.757 "claimed": true, 00:08:23.757 "claim_type": "exclusive_write", 00:08:23.757 "zoned": false, 00:08:23.757 "supported_io_types": { 00:08:23.757 "read": true, 00:08:23.757 "write": true, 00:08:23.757 "unmap": true, 00:08:23.757 "flush": true, 00:08:23.757 "reset": true, 00:08:23.757 "nvme_admin": false, 00:08:23.757 "nvme_io": false, 00:08:23.757 "nvme_io_md": false, 00:08:23.757 "write_zeroes": true, 00:08:23.757 "zcopy": true, 00:08:23.757 "get_zone_info": false, 00:08:23.757 "zone_management": false, 00:08:23.757 "zone_append": false, 00:08:23.757 "compare": false, 00:08:23.757 "compare_and_write": false, 00:08:23.757 "abort": true, 00:08:23.757 "seek_hole": false, 00:08:23.757 "seek_data": false, 00:08:23.757 "copy": true, 00:08:23.757 "nvme_iov_md": false 00:08:23.757 }, 00:08:23.757 "memory_domains": [ 00:08:23.757 { 00:08:23.757 "dma_device_id": "system", 00:08:23.757 "dma_device_type": 1 00:08:23.757 }, 00:08:23.757 { 00:08:23.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.757 "dma_device_type": 2 00:08:23.757 } 00:08:23.757 ], 00:08:23.757 "driver_specific": {} 00:08:23.757 } 00:08:23.757 ] 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.757 "name": "Existed_Raid", 00:08:23.757 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:23.757 "strip_size_kb": 64, 00:08:23.757 "state": "configuring", 00:08:23.757 "raid_level": "raid0", 00:08:23.757 "superblock": true, 
00:08:23.757 "num_base_bdevs": 3, 00:08:23.757 "num_base_bdevs_discovered": 2, 00:08:23.757 "num_base_bdevs_operational": 3, 00:08:23.757 "base_bdevs_list": [ 00:08:23.757 { 00:08:23.757 "name": "BaseBdev1", 00:08:23.757 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:23.757 "is_configured": true, 00:08:23.757 "data_offset": 2048, 00:08:23.757 "data_size": 63488 00:08:23.757 }, 00:08:23.757 { 00:08:23.757 "name": null, 00:08:23.757 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:23.757 "is_configured": false, 00:08:23.757 "data_offset": 0, 00:08:23.757 "data_size": 63488 00:08:23.757 }, 00:08:23.757 { 00:08:23.757 "name": "BaseBdev3", 00:08:23.757 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:23.757 "is_configured": true, 00:08:23.757 "data_offset": 2048, 00:08:23.757 "data_size": 63488 00:08:23.757 } 00:08:23.757 ] 00:08:23.757 }' 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.757 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:24.327 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.328 [2024-11-19 10:19:37.851984] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.328 "name": "Existed_Raid", 00:08:24.328 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:24.328 "strip_size_kb": 64, 00:08:24.328 "state": "configuring", 00:08:24.328 "raid_level": "raid0", 00:08:24.328 "superblock": true, 00:08:24.328 "num_base_bdevs": 3, 00:08:24.328 "num_base_bdevs_discovered": 1, 00:08:24.328 "num_base_bdevs_operational": 3, 00:08:24.328 "base_bdevs_list": [ 00:08:24.328 { 00:08:24.328 "name": "BaseBdev1", 00:08:24.328 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:24.328 "is_configured": true, 00:08:24.328 "data_offset": 2048, 00:08:24.328 "data_size": 63488 00:08:24.328 }, 00:08:24.328 { 00:08:24.328 "name": null, 00:08:24.328 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:24.328 "is_configured": false, 00:08:24.328 "data_offset": 0, 00:08:24.328 "data_size": 63488 00:08:24.328 }, 00:08:24.328 { 00:08:24.328 "name": null, 00:08:24.328 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:24.328 "is_configured": false, 00:08:24.328 "data_offset": 0, 00:08:24.328 "data_size": 63488 00:08:24.328 } 00:08:24.328 ] 00:08:24.328 }' 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.328 10:19:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.588 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.847 [2024-11-19 10:19:38.371170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:24.847 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.847 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:24.847 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.847 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:24.847 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.848 "name": "Existed_Raid", 00:08:24.848 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:24.848 "strip_size_kb": 64, 00:08:24.848 "state": "configuring", 00:08:24.848 "raid_level": "raid0", 00:08:24.848 "superblock": true, 00:08:24.848 "num_base_bdevs": 3, 00:08:24.848 "num_base_bdevs_discovered": 2, 00:08:24.848 "num_base_bdevs_operational": 3, 00:08:24.848 "base_bdevs_list": [ 00:08:24.848 { 00:08:24.848 "name": "BaseBdev1", 00:08:24.848 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:24.848 "is_configured": true, 00:08:24.848 "data_offset": 2048, 00:08:24.848 "data_size": 63488 00:08:24.848 }, 00:08:24.848 { 00:08:24.848 "name": null, 00:08:24.848 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:24.848 "is_configured": false, 00:08:24.848 "data_offset": 0, 00:08:24.848 "data_size": 63488 00:08:24.848 }, 00:08:24.848 { 00:08:24.848 "name": "BaseBdev3", 00:08:24.848 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:24.848 "is_configured": true, 00:08:24.848 "data_offset": 2048, 00:08:24.848 "data_size": 63488 00:08:24.848 } 00:08:24.848 ] 00:08:24.848 }' 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.848 10:19:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.108 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.108 [2024-11-19 10:19:38.874382] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.367 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.368 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.368 10:19:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.368 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.368 10:19:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.368 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.368 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.368 "name": "Existed_Raid", 00:08:25.368 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:25.368 "strip_size_kb": 64, 00:08:25.368 "state": "configuring", 00:08:25.368 "raid_level": "raid0", 00:08:25.368 "superblock": true, 00:08:25.368 "num_base_bdevs": 3, 00:08:25.368 "num_base_bdevs_discovered": 1, 00:08:25.368 "num_base_bdevs_operational": 3, 00:08:25.368 "base_bdevs_list": [ 00:08:25.368 { 00:08:25.368 "name": null, 00:08:25.368 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:25.368 "is_configured": false, 00:08:25.368 "data_offset": 0, 00:08:25.368 "data_size": 63488 00:08:25.368 }, 00:08:25.368 { 00:08:25.368 "name": null, 00:08:25.368 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:25.368 "is_configured": false, 00:08:25.368 "data_offset": 0, 00:08:25.368 
"data_size": 63488 00:08:25.368 }, 00:08:25.368 { 00:08:25.368 "name": "BaseBdev3", 00:08:25.368 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:25.368 "is_configured": true, 00:08:25.368 "data_offset": 2048, 00:08:25.368 "data_size": 63488 00:08:25.368 } 00:08:25.368 ] 00:08:25.368 }' 00:08:25.368 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.368 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.936 [2024-11-19 10:19:39.450521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:25.936 10:19:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:25.936 "name": "Existed_Raid", 00:08:25.936 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:25.936 "strip_size_kb": 64, 00:08:25.936 "state": "configuring", 00:08:25.936 "raid_level": "raid0", 00:08:25.936 "superblock": true, 00:08:25.936 "num_base_bdevs": 3, 00:08:25.936 
"num_base_bdevs_discovered": 2, 00:08:25.936 "num_base_bdevs_operational": 3, 00:08:25.936 "base_bdevs_list": [ 00:08:25.936 { 00:08:25.936 "name": null, 00:08:25.936 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:25.936 "is_configured": false, 00:08:25.936 "data_offset": 0, 00:08:25.936 "data_size": 63488 00:08:25.936 }, 00:08:25.936 { 00:08:25.936 "name": "BaseBdev2", 00:08:25.936 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:25.936 "is_configured": true, 00:08:25.936 "data_offset": 2048, 00:08:25.936 "data_size": 63488 00:08:25.936 }, 00:08:25.936 { 00:08:25.936 "name": "BaseBdev3", 00:08:25.936 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:25.936 "is_configured": true, 00:08:25.936 "data_offset": 2048, 00:08:25.936 "data_size": 63488 00:08:25.936 } 00:08:25.936 ] 00:08:25.936 }' 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:25.936 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:26.196 10:19:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.196 10:19:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81b1e2b6-2bf0-45ed-bf15-2567c50c0809 00:08:26.457 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.457 10:19:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.457 [2024-11-19 10:19:40.021316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:26.457 [2024-11-19 10:19:40.021636] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:26.457 [2024-11-19 10:19:40.021690] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.457 [2024-11-19 10:19:40.022011] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:26.457 [2024-11-19 10:19:40.022213] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.457 [2024-11-19 10:19:40.022256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:26.457 NewBaseBdev 00:08:26.457 [2024-11-19 10:19:40.022458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:26.457 
10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.457 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.457 [ 00:08:26.457 { 00:08:26.457 "name": "NewBaseBdev", 00:08:26.457 "aliases": [ 00:08:26.457 "81b1e2b6-2bf0-45ed-bf15-2567c50c0809" 00:08:26.457 ], 00:08:26.457 "product_name": "Malloc disk", 00:08:26.457 "block_size": 512, 00:08:26.457 "num_blocks": 65536, 00:08:26.457 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:26.457 "assigned_rate_limits": { 00:08:26.457 "rw_ios_per_sec": 0, 00:08:26.457 "rw_mbytes_per_sec": 0, 00:08:26.457 "r_mbytes_per_sec": 0, 00:08:26.457 "w_mbytes_per_sec": 0 00:08:26.457 }, 00:08:26.457 "claimed": true, 00:08:26.457 "claim_type": "exclusive_write", 00:08:26.457 "zoned": false, 00:08:26.457 "supported_io_types": { 00:08:26.457 "read": true, 00:08:26.457 "write": true, 00:08:26.457 
"unmap": true, 00:08:26.457 "flush": true, 00:08:26.457 "reset": true, 00:08:26.457 "nvme_admin": false, 00:08:26.457 "nvme_io": false, 00:08:26.457 "nvme_io_md": false, 00:08:26.457 "write_zeroes": true, 00:08:26.457 "zcopy": true, 00:08:26.457 "get_zone_info": false, 00:08:26.457 "zone_management": false, 00:08:26.457 "zone_append": false, 00:08:26.457 "compare": false, 00:08:26.457 "compare_and_write": false, 00:08:26.457 "abort": true, 00:08:26.457 "seek_hole": false, 00:08:26.457 "seek_data": false, 00:08:26.457 "copy": true, 00:08:26.457 "nvme_iov_md": false 00:08:26.457 }, 00:08:26.457 "memory_domains": [ 00:08:26.457 { 00:08:26.457 "dma_device_id": "system", 00:08:26.457 "dma_device_type": 1 00:08:26.457 }, 00:08:26.457 { 00:08:26.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.457 "dma_device_type": 2 00:08:26.458 } 00:08:26.458 ], 00:08:26.458 "driver_specific": {} 00:08:26.458 } 00:08:26.458 ] 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.458 "name": "Existed_Raid", 00:08:26.458 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:26.458 "strip_size_kb": 64, 00:08:26.458 "state": "online", 00:08:26.458 "raid_level": "raid0", 00:08:26.458 "superblock": true, 00:08:26.458 "num_base_bdevs": 3, 00:08:26.458 "num_base_bdevs_discovered": 3, 00:08:26.458 "num_base_bdevs_operational": 3, 00:08:26.458 "base_bdevs_list": [ 00:08:26.458 { 00:08:26.458 "name": "NewBaseBdev", 00:08:26.458 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:26.458 "is_configured": true, 00:08:26.458 "data_offset": 2048, 00:08:26.458 "data_size": 63488 00:08:26.458 }, 00:08:26.458 { 00:08:26.458 "name": "BaseBdev2", 00:08:26.458 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:26.458 "is_configured": true, 00:08:26.458 "data_offset": 2048, 00:08:26.458 "data_size": 63488 00:08:26.458 }, 00:08:26.458 { 00:08:26.458 "name": "BaseBdev3", 00:08:26.458 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:26.458 
"is_configured": true, 00:08:26.458 "data_offset": 2048, 00:08:26.458 "data_size": 63488 00:08:26.458 } 00:08:26.458 ] 00:08:26.458 }' 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.458 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.718 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.718 [2024-11-19 10:19:40.496904] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:26.978 "name": "Existed_Raid", 00:08:26.978 "aliases": [ 00:08:26.978 "7540edd8-044b-4d65-a122-1fa5ae338d64" 00:08:26.978 ], 00:08:26.978 "product_name": "Raid 
Volume", 00:08:26.978 "block_size": 512, 00:08:26.978 "num_blocks": 190464, 00:08:26.978 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:26.978 "assigned_rate_limits": { 00:08:26.978 "rw_ios_per_sec": 0, 00:08:26.978 "rw_mbytes_per_sec": 0, 00:08:26.978 "r_mbytes_per_sec": 0, 00:08:26.978 "w_mbytes_per_sec": 0 00:08:26.978 }, 00:08:26.978 "claimed": false, 00:08:26.978 "zoned": false, 00:08:26.978 "supported_io_types": { 00:08:26.978 "read": true, 00:08:26.978 "write": true, 00:08:26.978 "unmap": true, 00:08:26.978 "flush": true, 00:08:26.978 "reset": true, 00:08:26.978 "nvme_admin": false, 00:08:26.978 "nvme_io": false, 00:08:26.978 "nvme_io_md": false, 00:08:26.978 "write_zeroes": true, 00:08:26.978 "zcopy": false, 00:08:26.978 "get_zone_info": false, 00:08:26.978 "zone_management": false, 00:08:26.978 "zone_append": false, 00:08:26.978 "compare": false, 00:08:26.978 "compare_and_write": false, 00:08:26.978 "abort": false, 00:08:26.978 "seek_hole": false, 00:08:26.978 "seek_data": false, 00:08:26.978 "copy": false, 00:08:26.978 "nvme_iov_md": false 00:08:26.978 }, 00:08:26.978 "memory_domains": [ 00:08:26.978 { 00:08:26.978 "dma_device_id": "system", 00:08:26.978 "dma_device_type": 1 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.978 "dma_device_type": 2 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "dma_device_id": "system", 00:08:26.978 "dma_device_type": 1 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.978 "dma_device_type": 2 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "dma_device_id": "system", 00:08:26.978 "dma_device_type": 1 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:26.978 "dma_device_type": 2 00:08:26.978 } 00:08:26.978 ], 00:08:26.978 "driver_specific": { 00:08:26.978 "raid": { 00:08:26.978 "uuid": "7540edd8-044b-4d65-a122-1fa5ae338d64", 00:08:26.978 "strip_size_kb": 64, 00:08:26.978 "state": "online", 
00:08:26.978 "raid_level": "raid0", 00:08:26.978 "superblock": true, 00:08:26.978 "num_base_bdevs": 3, 00:08:26.978 "num_base_bdevs_discovered": 3, 00:08:26.978 "num_base_bdevs_operational": 3, 00:08:26.978 "base_bdevs_list": [ 00:08:26.978 { 00:08:26.978 "name": "NewBaseBdev", 00:08:26.978 "uuid": "81b1e2b6-2bf0-45ed-bf15-2567c50c0809", 00:08:26.978 "is_configured": true, 00:08:26.978 "data_offset": 2048, 00:08:26.978 "data_size": 63488 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "name": "BaseBdev2", 00:08:26.978 "uuid": "237060dd-3f4b-4861-a06e-2d3d4afaa721", 00:08:26.978 "is_configured": true, 00:08:26.978 "data_offset": 2048, 00:08:26.978 "data_size": 63488 00:08:26.978 }, 00:08:26.978 { 00:08:26.978 "name": "BaseBdev3", 00:08:26.978 "uuid": "c79ccf62-99d3-4080-b94e-6ee3542754ea", 00:08:26.978 "is_configured": true, 00:08:26.978 "data_offset": 2048, 00:08:26.978 "data_size": 63488 00:08:26.978 } 00:08:26.978 ] 00:08:26.978 } 00:08:26.978 } 00:08:26.978 }' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:26.978 BaseBdev2 00:08:26.978 BaseBdev3' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.978 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.239 [2024-11-19 10:19:40.760124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.239 [2024-11-19 10:19:40.760244] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.239 [2024-11-19 10:19:40.760366] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.239 [2024-11-19 10:19:40.760473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.239 [2024-11-19 10:19:40.760530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64274 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64274 ']' 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64274 00:08:27.239 10:19:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64274 00:08:27.239 killing process with pid 64274 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64274' 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64274 00:08:27.239 [2024-11-19 10:19:40.808846] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:27.239 10:19:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64274 00:08:27.499 [2024-11-19 10:19:41.141069] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.882 10:19:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:28.882 00:08:28.882 real 0m10.544s 00:08:28.882 user 0m16.761s 00:08:28.882 sys 0m1.815s 00:08:28.882 ************************************ 00:08:28.882 END TEST raid_state_function_test_sb 00:08:28.882 ************************************ 00:08:28.882 10:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.882 10:19:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 10:19:42 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:28.882 10:19:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:28.882 10:19:42 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.882 10:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 ************************************ 00:08:28.882 START TEST raid_superblock_test 00:08:28.882 ************************************ 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:28.882 10:19:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64896 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64896 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64896 ']' 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.882 10:19:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.882 [2024-11-19 10:19:42.467430] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:28.882 [2024-11-19 10:19:42.467643] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64896 ] 00:08:28.882 [2024-11-19 10:19:42.641147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.142 [2024-11-19 10:19:42.752470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.401 [2024-11-19 10:19:42.947438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.401 [2024-11-19 10:19:42.947573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:29.662 
10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.662 malloc1 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.662 [2024-11-19 10:19:43.333484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:29.662 [2024-11-19 10:19:43.333618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.662 [2024-11-19 10:19:43.333663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:29.662 [2024-11-19 10:19:43.333696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.662 [2024-11-19 10:19:43.335800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.662 [2024-11-19 10:19:43.335891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:29.662 pt1 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.662 malloc2 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.662 [2024-11-19 10:19:43.391791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:29.662 [2024-11-19 10:19:43.391845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.662 [2024-11-19 10:19:43.391867] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:29.662 [2024-11-19 10:19:43.391875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.662 [2024-11-19 10:19:43.393868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.662 [2024-11-19 10:19:43.393905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:29.662 
pt2 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.662 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.923 malloc3 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.923 [2024-11-19 10:19:43.460954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:29.923 [2024-11-19 10:19:43.461064] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:29.923 [2024-11-19 10:19:43.461120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:29.923 [2024-11-19 10:19:43.461149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:29.923 [2024-11-19 10:19:43.463183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:29.923 [2024-11-19 10:19:43.463263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:29.923 pt3 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.923 [2024-11-19 10:19:43.472982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:29.923 [2024-11-19 10:19:43.474743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:29.923 [2024-11-19 10:19:43.474870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:29.923 [2024-11-19 10:19:43.475052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:29.923 [2024-11-19 10:19:43.475114] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:29.923 [2024-11-19 10:19:43.475375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:29.923 [2024-11-19 10:19:43.475582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:29.923 [2024-11-19 10:19:43.475625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:29.923 [2024-11-19 10:19:43.475804] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.923 10:19:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.923 "name": "raid_bdev1", 00:08:29.923 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:29.923 "strip_size_kb": 64, 00:08:29.923 "state": "online", 00:08:29.923 "raid_level": "raid0", 00:08:29.923 "superblock": true, 00:08:29.923 "num_base_bdevs": 3, 00:08:29.923 "num_base_bdevs_discovered": 3, 00:08:29.923 "num_base_bdevs_operational": 3, 00:08:29.923 "base_bdevs_list": [ 00:08:29.923 { 00:08:29.923 "name": "pt1", 00:08:29.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:29.923 "is_configured": true, 00:08:29.923 "data_offset": 2048, 00:08:29.923 "data_size": 63488 00:08:29.923 }, 00:08:29.923 { 00:08:29.923 "name": "pt2", 00:08:29.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:29.923 "is_configured": true, 00:08:29.923 "data_offset": 2048, 00:08:29.923 "data_size": 63488 00:08:29.923 }, 00:08:29.923 { 00:08:29.923 "name": "pt3", 00:08:29.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:29.923 "is_configured": true, 00:08:29.923 "data_offset": 2048, 00:08:29.923 "data_size": 63488 00:08:29.923 } 00:08:29.923 ] 00:08:29.923 }' 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.923 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:30.184 [2024-11-19 10:19:43.920487] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:30.184 "name": "raid_bdev1", 00:08:30.184 "aliases": [ 00:08:30.184 "6d063fc2-906c-422a-bd21-cf2efa50a01d" 00:08:30.184 ], 00:08:30.184 "product_name": "Raid Volume", 00:08:30.184 "block_size": 512, 00:08:30.184 "num_blocks": 190464, 00:08:30.184 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:30.184 "assigned_rate_limits": { 00:08:30.184 "rw_ios_per_sec": 0, 00:08:30.184 "rw_mbytes_per_sec": 0, 00:08:30.184 "r_mbytes_per_sec": 0, 00:08:30.184 "w_mbytes_per_sec": 0 00:08:30.184 }, 00:08:30.184 "claimed": false, 00:08:30.184 "zoned": false, 00:08:30.184 "supported_io_types": { 00:08:30.184 "read": true, 00:08:30.184 "write": true, 00:08:30.184 "unmap": true, 00:08:30.184 "flush": true, 00:08:30.184 "reset": true, 00:08:30.184 "nvme_admin": false, 00:08:30.184 "nvme_io": false, 00:08:30.184 "nvme_io_md": false, 00:08:30.184 "write_zeroes": true, 00:08:30.184 "zcopy": false, 00:08:30.184 "get_zone_info": false, 00:08:30.184 "zone_management": false, 00:08:30.184 "zone_append": false, 00:08:30.184 "compare": 
false, 00:08:30.184 "compare_and_write": false, 00:08:30.184 "abort": false, 00:08:30.184 "seek_hole": false, 00:08:30.184 "seek_data": false, 00:08:30.184 "copy": false, 00:08:30.184 "nvme_iov_md": false 00:08:30.184 }, 00:08:30.184 "memory_domains": [ 00:08:30.184 { 00:08:30.184 "dma_device_id": "system", 00:08:30.184 "dma_device_type": 1 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.184 "dma_device_type": 2 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "dma_device_id": "system", 00:08:30.184 "dma_device_type": 1 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.184 "dma_device_type": 2 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "dma_device_id": "system", 00:08:30.184 "dma_device_type": 1 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:30.184 "dma_device_type": 2 00:08:30.184 } 00:08:30.184 ], 00:08:30.184 "driver_specific": { 00:08:30.184 "raid": { 00:08:30.184 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:30.184 "strip_size_kb": 64, 00:08:30.184 "state": "online", 00:08:30.184 "raid_level": "raid0", 00:08:30.184 "superblock": true, 00:08:30.184 "num_base_bdevs": 3, 00:08:30.184 "num_base_bdevs_discovered": 3, 00:08:30.184 "num_base_bdevs_operational": 3, 00:08:30.184 "base_bdevs_list": [ 00:08:30.184 { 00:08:30.184 "name": "pt1", 00:08:30.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.184 "is_configured": true, 00:08:30.184 "data_offset": 2048, 00:08:30.184 "data_size": 63488 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "name": "pt2", 00:08:30.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.184 "is_configured": true, 00:08:30.184 "data_offset": 2048, 00:08:30.184 "data_size": 63488 00:08:30.184 }, 00:08:30.184 { 00:08:30.184 "name": "pt3", 00:08:30.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:30.184 "is_configured": true, 00:08:30.184 "data_offset": 2048, 00:08:30.184 "data_size": 
63488 00:08:30.184 } 00:08:30.184 ] 00:08:30.184 } 00:08:30.184 } 00:08:30.184 }' 00:08:30.184 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:30.444 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:30.445 pt2 00:08:30.445 pt3' 00:08:30.445 10:19:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 [2024-11-19 10:19:44.144150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6d063fc2-906c-422a-bd21-cf2efa50a01d 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 6d063fc2-906c-422a-bd21-cf2efa50a01d ']' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 [2024-11-19 10:19:44.187751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.445 [2024-11-19 10:19:44.187822] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.445 [2024-11-19 10:19:44.187901] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.445 [2024-11-19 10:19:44.187960] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.445 [2024-11-19 10:19:44.187970] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:30.445 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:30.706 10:19:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 [2024-11-19 10:19:44.339530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:30.706 [2024-11-19 10:19:44.341347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:30.706 [2024-11-19 10:19:44.341453] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:30.706 [2024-11-19 10:19:44.341542] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:30.706 [2024-11-19 10:19:44.341630] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:30.706 [2024-11-19 10:19:44.341654] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:30.706 [2024-11-19 10:19:44.341674] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:30.706 [2024-11-19 10:19:44.341686] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:30.706 request: 00:08:30.706 { 00:08:30.706 "name": "raid_bdev1", 00:08:30.706 "raid_level": "raid0", 00:08:30.706 "base_bdevs": [ 00:08:30.706 "malloc1", 00:08:30.706 "malloc2", 00:08:30.706 "malloc3" 00:08:30.706 ], 00:08:30.706 "strip_size_kb": 64, 00:08:30.706 "superblock": false, 00:08:30.706 "method": "bdev_raid_create", 00:08:30.706 "req_id": 1 00:08:30.706 } 00:08:30.706 Got JSON-RPC error response 00:08:30.706 response: 00:08:30.706 { 00:08:30.706 "code": -17, 00:08:30.706 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:30.706 } 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.706 [2024-11-19 10:19:44.403380] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:30.706 [2024-11-19 10:19:44.403466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.706 [2024-11-19 10:19:44.403500] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:30.706 [2024-11-19 10:19:44.403531] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.706 [2024-11-19 10:19:44.405650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.706 [2024-11-19 10:19:44.405720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:30.706 [2024-11-19 10:19:44.405809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:30.706 [2024-11-19 10:19:44.405881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:30.706 pt1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.706 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.707 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.707 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.707 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.707 "name": "raid_bdev1", 00:08:30.707 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:30.707 
"strip_size_kb": 64, 00:08:30.707 "state": "configuring", 00:08:30.707 "raid_level": "raid0", 00:08:30.707 "superblock": true, 00:08:30.707 "num_base_bdevs": 3, 00:08:30.707 "num_base_bdevs_discovered": 1, 00:08:30.707 "num_base_bdevs_operational": 3, 00:08:30.707 "base_bdevs_list": [ 00:08:30.707 { 00:08:30.707 "name": "pt1", 00:08:30.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:30.707 "is_configured": true, 00:08:30.707 "data_offset": 2048, 00:08:30.707 "data_size": 63488 00:08:30.707 }, 00:08:30.707 { 00:08:30.707 "name": null, 00:08:30.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:30.707 "is_configured": false, 00:08:30.707 "data_offset": 2048, 00:08:30.707 "data_size": 63488 00:08:30.707 }, 00:08:30.707 { 00:08:30.707 "name": null, 00:08:30.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:30.707 "is_configured": false, 00:08:30.707 "data_offset": 2048, 00:08:30.707 "data_size": 63488 00:08:30.707 } 00:08:30.707 ] 00:08:30.707 }' 00:08:30.707 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.707 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 [2024-11-19 10:19:44.866632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.277 [2024-11-19 10:19:44.866694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.277 [2024-11-19 10:19:44.866716] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:31.277 [2024-11-19 10:19:44.866724] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.277 [2024-11-19 10:19:44.867172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.277 [2024-11-19 10:19:44.867192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.277 [2024-11-19 10:19:44.867281] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:31.277 [2024-11-19 10:19:44.867302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.277 pt2 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 [2024-11-19 10:19:44.878616] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.277 10:19:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.277 "name": "raid_bdev1", 00:08:31.277 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:31.277 "strip_size_kb": 64, 00:08:31.277 "state": "configuring", 00:08:31.277 "raid_level": "raid0", 00:08:31.277 "superblock": true, 00:08:31.277 "num_base_bdevs": 3, 00:08:31.277 "num_base_bdevs_discovered": 1, 00:08:31.277 "num_base_bdevs_operational": 3, 00:08:31.277 "base_bdevs_list": [ 00:08:31.277 { 00:08:31.277 "name": "pt1", 00:08:31.277 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.277 "is_configured": true, 00:08:31.277 "data_offset": 2048, 00:08:31.277 "data_size": 63488 00:08:31.277 }, 00:08:31.277 { 00:08:31.277 "name": null, 00:08:31.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.277 "is_configured": false, 00:08:31.277 "data_offset": 0, 00:08:31.277 "data_size": 63488 00:08:31.277 }, 00:08:31.277 { 00:08:31.277 "name": null, 00:08:31.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.277 
"is_configured": false, 00:08:31.277 "data_offset": 2048, 00:08:31.277 "data_size": 63488 00:08:31.277 } 00:08:31.277 ] 00:08:31.277 }' 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.277 10:19:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.538 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:31.538 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 [2024-11-19 10:19:45.321935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:31.798 [2024-11-19 10:19:45.322128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.798 [2024-11-19 10:19:45.322191] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:31.798 [2024-11-19 10:19:45.322225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.798 [2024-11-19 10:19:45.322817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.798 [2024-11-19 10:19:45.322896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:31.798 [2024-11-19 10:19:45.323040] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:31.798 [2024-11-19 10:19:45.323114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:31.798 pt2 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 [2024-11-19 10:19:45.333896] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:31.798 [2024-11-19 10:19:45.334014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.798 [2024-11-19 10:19:45.334035] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:31.798 [2024-11-19 10:19:45.334046] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.798 [2024-11-19 10:19:45.334532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.798 [2024-11-19 10:19:45.334561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:31.798 [2024-11-19 10:19:45.334651] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:31.798 [2024-11-19 10:19:45.334680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:31.798 [2024-11-19 10:19:45.334820] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.798 [2024-11-19 10:19:45.334832] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:31.798 [2024-11-19 10:19:45.335152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:31.798 [2024-11-19 10:19:45.335371] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.798 [2024-11-19 10:19:45.335384] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:31.798 [2024-11-19 10:19:45.335545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.798 pt3 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.798 "name": "raid_bdev1", 00:08:31.798 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:31.798 "strip_size_kb": 64, 00:08:31.798 "state": "online", 00:08:31.798 "raid_level": "raid0", 00:08:31.798 "superblock": true, 00:08:31.798 "num_base_bdevs": 3, 00:08:31.798 "num_base_bdevs_discovered": 3, 00:08:31.798 "num_base_bdevs_operational": 3, 00:08:31.798 "base_bdevs_list": [ 00:08:31.798 { 00:08:31.798 "name": "pt1", 00:08:31.798 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:31.798 "is_configured": true, 00:08:31.798 "data_offset": 2048, 00:08:31.798 "data_size": 63488 00:08:31.798 }, 00:08:31.798 { 00:08:31.798 "name": "pt2", 00:08:31.798 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:31.798 "is_configured": true, 00:08:31.798 "data_offset": 2048, 00:08:31.798 "data_size": 63488 00:08:31.798 }, 00:08:31.798 { 00:08:31.798 "name": "pt3", 00:08:31.798 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:31.798 "is_configured": true, 00:08:31.798 "data_offset": 2048, 00:08:31.798 "data_size": 63488 00:08:31.798 } 00:08:31.798 ] 00:08:31.798 }' 00:08:31.798 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.799 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:32.059 10:19:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.059 [2024-11-19 10:19:45.721606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:32.059 "name": "raid_bdev1", 00:08:32.059 "aliases": [ 00:08:32.059 "6d063fc2-906c-422a-bd21-cf2efa50a01d" 00:08:32.059 ], 00:08:32.059 "product_name": "Raid Volume", 00:08:32.059 "block_size": 512, 00:08:32.059 "num_blocks": 190464, 00:08:32.059 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:32.059 "assigned_rate_limits": { 00:08:32.059 "rw_ios_per_sec": 0, 00:08:32.059 "rw_mbytes_per_sec": 0, 00:08:32.059 "r_mbytes_per_sec": 0, 00:08:32.059 "w_mbytes_per_sec": 0 00:08:32.059 }, 00:08:32.059 "claimed": false, 00:08:32.059 "zoned": false, 00:08:32.059 "supported_io_types": { 00:08:32.059 "read": true, 00:08:32.059 "write": true, 00:08:32.059 "unmap": true, 00:08:32.059 "flush": true, 00:08:32.059 "reset": true, 00:08:32.059 "nvme_admin": false, 00:08:32.059 "nvme_io": false, 00:08:32.059 "nvme_io_md": false, 00:08:32.059 
"write_zeroes": true, 00:08:32.059 "zcopy": false, 00:08:32.059 "get_zone_info": false, 00:08:32.059 "zone_management": false, 00:08:32.059 "zone_append": false, 00:08:32.059 "compare": false, 00:08:32.059 "compare_and_write": false, 00:08:32.059 "abort": false, 00:08:32.059 "seek_hole": false, 00:08:32.059 "seek_data": false, 00:08:32.059 "copy": false, 00:08:32.059 "nvme_iov_md": false 00:08:32.059 }, 00:08:32.059 "memory_domains": [ 00:08:32.059 { 00:08:32.059 "dma_device_id": "system", 00:08:32.059 "dma_device_type": 1 00:08:32.059 }, 00:08:32.059 { 00:08:32.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.059 "dma_device_type": 2 00:08:32.059 }, 00:08:32.059 { 00:08:32.059 "dma_device_id": "system", 00:08:32.059 "dma_device_type": 1 00:08:32.059 }, 00:08:32.059 { 00:08:32.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.059 "dma_device_type": 2 00:08:32.059 }, 00:08:32.059 { 00:08:32.059 "dma_device_id": "system", 00:08:32.059 "dma_device_type": 1 00:08:32.059 }, 00:08:32.059 { 00:08:32.059 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.059 "dma_device_type": 2 00:08:32.059 } 00:08:32.059 ], 00:08:32.059 "driver_specific": { 00:08:32.059 "raid": { 00:08:32.059 "uuid": "6d063fc2-906c-422a-bd21-cf2efa50a01d", 00:08:32.059 "strip_size_kb": 64, 00:08:32.059 "state": "online", 00:08:32.059 "raid_level": "raid0", 00:08:32.059 "superblock": true, 00:08:32.059 "num_base_bdevs": 3, 00:08:32.059 "num_base_bdevs_discovered": 3, 00:08:32.059 "num_base_bdevs_operational": 3, 00:08:32.059 "base_bdevs_list": [ 00:08:32.059 { 00:08:32.059 "name": "pt1", 00:08:32.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.059 "is_configured": true, 00:08:32.059 "data_offset": 2048, 00:08:32.059 "data_size": 63488 00:08:32.059 }, 00:08:32.059 { 00:08:32.059 "name": "pt2", 00:08:32.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.059 "is_configured": true, 00:08:32.059 "data_offset": 2048, 00:08:32.059 "data_size": 63488 00:08:32.059 }, 00:08:32.059 
{ 00:08:32.059 "name": "pt3", 00:08:32.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:32.059 "is_configured": true, 00:08:32.059 "data_offset": 2048, 00:08:32.059 "data_size": 63488 00:08:32.059 } 00:08:32.059 ] 00:08:32.059 } 00:08:32.059 } 00:08:32.059 }' 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:32.059 pt2 00:08:32.059 pt3' 00:08:32.059 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:32.320 10:19:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.320 10:19:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:32.320 
[2024-11-19 10:19:45.992971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6d063fc2-906c-422a-bd21-cf2efa50a01d '!=' 6d063fc2-906c-422a-bd21-cf2efa50a01d ']' 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64896 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64896 ']' 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64896 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64896 00:08:32.320 killing process with pid 64896 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64896' 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64896 00:08:32.320 [2024-11-19 10:19:46.078585] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.320 [2024-11-19 10:19:46.078702] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.320 10:19:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64896 00:08:32.320 [2024-11-19 10:19:46.078770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.320 [2024-11-19 10:19:46.078784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:32.890 [2024-11-19 10:19:46.411560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.841 10:19:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:33.841 00:08:33.841 real 0m5.226s 00:08:33.841 user 0m7.425s 00:08:33.841 sys 0m0.849s 00:08:33.841 10:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.841 10:19:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.841 ************************************ 00:08:33.841 END TEST raid_superblock_test 00:08:33.841 ************************************ 00:08:34.101 10:19:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:34.101 10:19:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:34.101 10:19:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.101 10:19:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.101 ************************************ 00:08:34.101 START TEST raid_read_error_test 00:08:34.101 ************************************ 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:34.101 10:19:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.O5CtnIKkzL 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65153 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65153 00:08:34.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65153 ']' 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:34.101 10:19:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.101 [2024-11-19 10:19:47.780712] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:08:34.101 [2024-11-19 10:19:47.780810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65153 ] 00:08:34.361 [2024-11-19 10:19:47.934122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.361 [2024-11-19 10:19:48.071804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.621 [2024-11-19 10:19:48.311778] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.621 [2024-11-19 10:19:48.311861] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.880 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.880 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.880 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:34.880 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:34.880 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.880 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 BaseBdev1_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 true 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 [2024-11-19 10:19:48.694033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:35.141 [2024-11-19 10:19:48.694106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.141 [2024-11-19 10:19:48.694128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:35.141 [2024-11-19 10:19:48.694140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.141 [2024-11-19 10:19:48.696542] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.141 [2024-11-19 10:19:48.696583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:35.141 BaseBdev1 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 BaseBdev2_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 true 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 [2024-11-19 10:19:48.768669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:35.141 [2024-11-19 10:19:48.768746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.141 [2024-11-19 10:19:48.768764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:35.141 [2024-11-19 10:19:48.768777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.141 [2024-11-19 10:19:48.771285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.141 [2024-11-19 10:19:48.771425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:35.141 BaseBdev2 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 BaseBdev3_malloc 00:08:35.141 10:19:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 true 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 [2024-11-19 10:19:48.851638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:35.141 [2024-11-19 10:19:48.851805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.141 [2024-11-19 10:19:48.851833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:35.141 [2024-11-19 10:19:48.851847] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.141 [2024-11-19 10:19:48.854408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.141 [2024-11-19 10:19:48.854449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:35.141 BaseBdev3 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 [2024-11-19 10:19:48.863696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.141 [2024-11-19 10:19:48.865767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:35.141 [2024-11-19 10:19:48.865853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:35.141 [2024-11-19 10:19:48.866064] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:35.141 [2024-11-19 10:19:48.866080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:35.141 [2024-11-19 10:19:48.866356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:35.141 [2024-11-19 10:19:48.866526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:35.141 [2024-11-19 10:19:48.866541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:35.141 [2024-11-19 10:19:48.866685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.141 10:19:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.141 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.402 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.402 "name": "raid_bdev1", 00:08:35.402 "uuid": "f1a8c703-f01d-4def-927d-e9dccc0a9ca9", 00:08:35.402 "strip_size_kb": 64, 00:08:35.402 "state": "online", 00:08:35.402 "raid_level": "raid0", 00:08:35.402 "superblock": true, 00:08:35.402 "num_base_bdevs": 3, 00:08:35.402 "num_base_bdevs_discovered": 3, 00:08:35.402 "num_base_bdevs_operational": 3, 00:08:35.402 "base_bdevs_list": [ 00:08:35.402 { 00:08:35.402 "name": "BaseBdev1", 00:08:35.402 "uuid": "546ed18e-5497-59a6-a2c6-e4882250605c", 00:08:35.402 "is_configured": true, 00:08:35.402 "data_offset": 2048, 00:08:35.402 "data_size": 63488 00:08:35.402 }, 00:08:35.402 { 00:08:35.402 "name": "BaseBdev2", 00:08:35.402 "uuid": "836ee56f-5ad8-5eea-998d-0d42e20f3c59", 00:08:35.402 "is_configured": true, 00:08:35.402 "data_offset": 2048, 00:08:35.402 "data_size": 63488 
00:08:35.402 }, 00:08:35.402 { 00:08:35.402 "name": "BaseBdev3", 00:08:35.402 "uuid": "b53383a6-733d-56d8-a99c-15eaaedc0945", 00:08:35.402 "is_configured": true, 00:08:35.402 "data_offset": 2048, 00:08:35.402 "data_size": 63488 00:08:35.402 } 00:08:35.402 ] 00:08:35.402 }' 00:08:35.402 10:19:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.402 10:19:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.667 10:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:35.667 10:19:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:35.926 [2024-11-19 10:19:49.452147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.866 "name": "raid_bdev1", 00:08:36.866 "uuid": "f1a8c703-f01d-4def-927d-e9dccc0a9ca9", 00:08:36.866 "strip_size_kb": 64, 00:08:36.866 "state": "online", 00:08:36.866 "raid_level": "raid0", 00:08:36.866 "superblock": true, 00:08:36.866 "num_base_bdevs": 3, 00:08:36.866 "num_base_bdevs_discovered": 3, 00:08:36.866 "num_base_bdevs_operational": 3, 00:08:36.866 "base_bdevs_list": [ 00:08:36.866 { 00:08:36.866 "name": "BaseBdev1", 00:08:36.866 "uuid": "546ed18e-5497-59a6-a2c6-e4882250605c", 00:08:36.866 "is_configured": true, 00:08:36.866 "data_offset": 2048, 00:08:36.866 "data_size": 63488 
00:08:36.866 }, 00:08:36.866 { 00:08:36.866 "name": "BaseBdev2", 00:08:36.866 "uuid": "836ee56f-5ad8-5eea-998d-0d42e20f3c59", 00:08:36.866 "is_configured": true, 00:08:36.866 "data_offset": 2048, 00:08:36.866 "data_size": 63488 00:08:36.866 }, 00:08:36.866 { 00:08:36.866 "name": "BaseBdev3", 00:08:36.866 "uuid": "b53383a6-733d-56d8-a99c-15eaaedc0945", 00:08:36.866 "is_configured": true, 00:08:36.866 "data_offset": 2048, 00:08:36.866 "data_size": 63488 00:08:36.866 } 00:08:36.866 ] 00:08:36.866 }' 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.866 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.126 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:37.126 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.126 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.126 [2024-11-19 10:19:50.853401] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:37.126 [2024-11-19 10:19:50.853450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.126 [2024-11-19 10:19:50.856169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.126 [2024-11-19 10:19:50.856221] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.126 [2024-11-19 10:19:50.856265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.127 [2024-11-19 10:19:50.856276] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:37.127 { 00:08:37.127 "results": [ 00:08:37.127 { 00:08:37.127 "job": "raid_bdev1", 00:08:37.127 "core_mask": "0x1", 00:08:37.127 "workload": "randrw", 00:08:37.127 "percentage": 50, 
00:08:37.127 "status": "finished", 00:08:37.127 "queue_depth": 1, 00:08:37.127 "io_size": 131072, 00:08:37.127 "runtime": 1.401667, 00:08:37.127 "iops": 13826.393858170308, 00:08:37.127 "mibps": 1728.2992322712885, 00:08:37.127 "io_failed": 1, 00:08:37.127 "io_timeout": 0, 00:08:37.127 "avg_latency_us": 101.88080425411013, 00:08:37.127 "min_latency_us": 21.463755458515283, 00:08:37.127 "max_latency_us": 1366.5257641921398 00:08:37.127 } 00:08:37.127 ], 00:08:37.127 "core_count": 1 00:08:37.127 } 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65153 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65153 ']' 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65153 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65153 00:08:37.127 killing process with pid 65153 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65153' 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65153 00:08:37.127 [2024-11-19 10:19:50.901916] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.127 10:19:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65153 00:08:37.386 [2024-11-19 
10:19:51.157922] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.O5CtnIKkzL 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.769 ************************************ 00:08:38.769 END TEST raid_read_error_test 00:08:38.769 ************************************ 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:38.769 00:08:38.769 real 0m4.641s 00:08:38.769 user 0m5.491s 00:08:38.769 sys 0m0.649s 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:38.769 10:19:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.769 10:19:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:38.769 10:19:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:38.769 10:19:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:38.769 10:19:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:38.769 ************************************ 00:08:38.769 START TEST raid_write_error_test 00:08:38.769 ************************************ 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:38.769 10:19:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:38.769 10:19:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.opPHs2zVOC 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65293 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65293 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65293 ']' 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:38.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:38.769 10:19:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.769 [2024-11-19 10:19:52.488012] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:38.769 [2024-11-19 10:19:52.488203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65293 ] 00:08:39.029 [2024-11-19 10:19:52.662939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.029 [2024-11-19 10:19:52.771454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.288 [2024-11-19 10:19:52.963074] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.288 [2024-11-19 10:19:52.963116] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:39.547 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:39.547 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:39.547 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.547 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:39.547 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.547 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.806 BaseBdev1_malloc 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.806 true 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.806 [2024-11-19 10:19:53.365813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:39.806 [2024-11-19 10:19:53.365870] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.806 [2024-11-19 10:19:53.365889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:39.806 [2024-11-19 10:19:53.365899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.806 [2024-11-19 10:19:53.367958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.806 [2024-11-19 10:19:53.368013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:39.806 BaseBdev1 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:39.806 BaseBdev2_malloc 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.806 true 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.806 [2024-11-19 10:19:53.429727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:39.806 [2024-11-19 10:19:53.429782] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.806 [2024-11-19 10:19:53.429801] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:39.806 [2024-11-19 10:19:53.429812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.806 [2024-11-19 10:19:53.432138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.806 [2024-11-19 10:19:53.432240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:39.806 BaseBdev2 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:39.807 10:19:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 BaseBdev3_malloc 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 true 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 [2024-11-19 10:19:53.509417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:39.807 [2024-11-19 10:19:53.509477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:39.807 [2024-11-19 10:19:53.509493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:39.807 [2024-11-19 10:19:53.509503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:39.807 [2024-11-19 10:19:53.511478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:39.807 [2024-11-19 10:19:53.511512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:39.807 BaseBdev3 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 [2024-11-19 10:19:53.521461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.807 [2024-11-19 10:19:53.523178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:39.807 [2024-11-19 10:19:53.523259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:39.807 [2024-11-19 10:19:53.523437] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:39.807 [2024-11-19 10:19:53.523458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:39.807 [2024-11-19 10:19:53.523685] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:39.807 [2024-11-19 10:19:53.523843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:39.807 [2024-11-19 10:19:53.523862] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:39.807 [2024-11-19 10:19:53.524012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.807 "name": "raid_bdev1", 00:08:39.807 "uuid": "bcf798f9-6c50-4362-b0c0-e991c6cd2bd0", 00:08:39.807 "strip_size_kb": 64, 00:08:39.807 "state": "online", 00:08:39.807 "raid_level": "raid0", 00:08:39.807 "superblock": true, 00:08:39.807 "num_base_bdevs": 3, 00:08:39.807 "num_base_bdevs_discovered": 3, 00:08:39.807 "num_base_bdevs_operational": 3, 00:08:39.807 "base_bdevs_list": [ 00:08:39.807 { 00:08:39.807 "name": "BaseBdev1", 
00:08:39.807 "uuid": "17f60989-7939-5d88-86fa-9b18b420bdd6", 00:08:39.807 "is_configured": true, 00:08:39.807 "data_offset": 2048, 00:08:39.807 "data_size": 63488 00:08:39.807 }, 00:08:39.807 { 00:08:39.807 "name": "BaseBdev2", 00:08:39.807 "uuid": "50361ca5-acad-5ff1-beb5-c09bb1aa8ec7", 00:08:39.807 "is_configured": true, 00:08:39.807 "data_offset": 2048, 00:08:39.807 "data_size": 63488 00:08:39.807 }, 00:08:39.807 { 00:08:39.807 "name": "BaseBdev3", 00:08:39.807 "uuid": "f044e5af-3f7d-562c-b01d-b3ad22d2b2f1", 00:08:39.807 "is_configured": true, 00:08:39.807 "data_offset": 2048, 00:08:39.807 "data_size": 63488 00:08:39.807 } 00:08:39.807 ] 00:08:39.807 }' 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.807 10:19:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.393 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:40.393 10:19:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:40.393 [2024-11-19 10:19:54.057894] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.333 10:19:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.333 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.333 10:19:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.333 "name": "raid_bdev1", 00:08:41.333 "uuid": "bcf798f9-6c50-4362-b0c0-e991c6cd2bd0", 00:08:41.333 "strip_size_kb": 64, 00:08:41.333 "state": "online", 00:08:41.333 
"raid_level": "raid0", 00:08:41.333 "superblock": true, 00:08:41.333 "num_base_bdevs": 3, 00:08:41.333 "num_base_bdevs_discovered": 3, 00:08:41.333 "num_base_bdevs_operational": 3, 00:08:41.333 "base_bdevs_list": [ 00:08:41.333 { 00:08:41.333 "name": "BaseBdev1", 00:08:41.333 "uuid": "17f60989-7939-5d88-86fa-9b18b420bdd6", 00:08:41.333 "is_configured": true, 00:08:41.333 "data_offset": 2048, 00:08:41.333 "data_size": 63488 00:08:41.333 }, 00:08:41.333 { 00:08:41.333 "name": "BaseBdev2", 00:08:41.333 "uuid": "50361ca5-acad-5ff1-beb5-c09bb1aa8ec7", 00:08:41.333 "is_configured": true, 00:08:41.333 "data_offset": 2048, 00:08:41.333 "data_size": 63488 00:08:41.333 }, 00:08:41.333 { 00:08:41.333 "name": "BaseBdev3", 00:08:41.333 "uuid": "f044e5af-3f7d-562c-b01d-b3ad22d2b2f1", 00:08:41.333 "is_configured": true, 00:08:41.333 "data_offset": 2048, 00:08:41.333 "data_size": 63488 00:08:41.333 } 00:08:41.333 ] 00:08:41.333 }' 00:08:41.333 10:19:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.333 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.902 [2024-11-19 10:19:55.407917] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:41.902 [2024-11-19 10:19:55.407952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:41.902 [2024-11-19 10:19:55.410429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:41.902 [2024-11-19 10:19:55.410477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.902 [2024-11-19 10:19:55.410514] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:41.902 [2024-11-19 10:19:55.410528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:41.902 { 00:08:41.902 "results": [ 00:08:41.902 { 00:08:41.902 "job": "raid_bdev1", 00:08:41.902 "core_mask": "0x1", 00:08:41.902 "workload": "randrw", 00:08:41.902 "percentage": 50, 00:08:41.902 "status": "finished", 00:08:41.902 "queue_depth": 1, 00:08:41.902 "io_size": 131072, 00:08:41.902 "runtime": 1.350948, 00:08:41.902 "iops": 16632.764547562158, 00:08:41.902 "mibps": 2079.0955684452697, 00:08:41.902 "io_failed": 1, 00:08:41.902 "io_timeout": 0, 00:08:41.902 "avg_latency_us": 83.5645177219197, 00:08:41.902 "min_latency_us": 25.152838427947597, 00:08:41.902 "max_latency_us": 1395.1441048034935 00:08:41.902 } 00:08:41.902 ], 00:08:41.902 "core_count": 1 00:08:41.902 } 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65293 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65293 ']' 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65293 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65293 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:41.902 killing process with pid 65293 00:08:41.902 10:19:55 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65293' 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65293 00:08:41.902 [2024-11-19 10:19:55.453749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:41.902 10:19:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65293 00:08:41.903 [2024-11-19 10:19:55.672926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.opPHs2zVOC 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:43.282 10:19:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:43.282 00:08:43.282 real 0m4.400s 00:08:43.283 user 0m5.211s 00:08:43.283 sys 0m0.549s 00:08:43.283 10:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.283 10:19:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.283 ************************************ 00:08:43.283 END TEST raid_write_error_test 00:08:43.283 ************************************ 00:08:43.283 10:19:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:43.283 10:19:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:43.283 10:19:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:43.283 10:19:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:43.283 10:19:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.283 ************************************ 00:08:43.283 START TEST raid_state_function_test 00:08:43.283 ************************************ 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:43.283 10:19:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65437 00:08:43.283 Process raid pid: 65437 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65437' 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65437 00:08:43.283 10:19:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65437 ']' 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:43.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:43.283 10:19:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.283 [2024-11-19 10:19:56.949072] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:43.283 [2024-11-19 10:19:56.949209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:43.543 [2024-11-19 10:19:57.122477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.543 [2024-11-19 10:19:57.233980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.802 [2024-11-19 10:19:57.427438] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:43.802 [2024-11-19 10:19:57.427477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.063 [2024-11-19 10:19:57.769935] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.063 [2024-11-19 10:19:57.769985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.063 [2024-11-19 10:19:57.770006] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.063 [2024-11-19 10:19:57.770016] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:44.063 [2024-11-19 10:19:57.770022] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.063 [2024-11-19 10:19:57.770030] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.063 "name": "Existed_Raid", 00:08:44.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.063 "strip_size_kb": 64, 00:08:44.063 "state": "configuring", 00:08:44.063 "raid_level": "concat", 00:08:44.063 "superblock": false, 00:08:44.063 "num_base_bdevs": 3, 00:08:44.063 "num_base_bdevs_discovered": 0, 00:08:44.063 "num_base_bdevs_operational": 3, 00:08:44.063 "base_bdevs_list": [ 00:08:44.063 { 00:08:44.063 "name": "BaseBdev1", 00:08:44.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.063 "is_configured": false, 00:08:44.063 "data_offset": 0, 00:08:44.063 "data_size": 0 00:08:44.063 }, 00:08:44.063 { 00:08:44.063 "name": "BaseBdev2", 00:08:44.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.063 "is_configured": false, 00:08:44.063 "data_offset": 0, 00:08:44.063 "data_size": 0 00:08:44.063 }, 00:08:44.063 { 00:08:44.063 "name": "BaseBdev3", 00:08:44.063 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:44.063 "is_configured": false, 00:08:44.063 "data_offset": 0, 00:08:44.063 "data_size": 0 00:08:44.063 } 00:08:44.063 ] 00:08:44.063 }' 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.063 10:19:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.632 [2024-11-19 10:19:58.221123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.632 [2024-11-19 10:19:58.221163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.632 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.632 [2024-11-19 10:19:58.233104] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:44.632 [2024-11-19 10:19:58.233145] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:44.632 [2024-11-19 10:19:58.233153] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:44.632 [2024-11-19 10:19:58.233162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:44.632 [2024-11-19 10:19:58.233168] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:44.633 [2024-11-19 10:19:58.233176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.633 [2024-11-19 10:19:58.278979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:44.633 BaseBdev1 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.633 [ 00:08:44.633 { 00:08:44.633 "name": "BaseBdev1", 00:08:44.633 "aliases": [ 00:08:44.633 "237f569d-19fc-445c-8193-79d11d4fd311" 00:08:44.633 ], 00:08:44.633 "product_name": "Malloc disk", 00:08:44.633 "block_size": 512, 00:08:44.633 "num_blocks": 65536, 00:08:44.633 "uuid": "237f569d-19fc-445c-8193-79d11d4fd311", 00:08:44.633 "assigned_rate_limits": { 00:08:44.633 "rw_ios_per_sec": 0, 00:08:44.633 "rw_mbytes_per_sec": 0, 00:08:44.633 "r_mbytes_per_sec": 0, 00:08:44.633 "w_mbytes_per_sec": 0 00:08:44.633 }, 00:08:44.633 "claimed": true, 00:08:44.633 "claim_type": "exclusive_write", 00:08:44.633 "zoned": false, 00:08:44.633 "supported_io_types": { 00:08:44.633 "read": true, 00:08:44.633 "write": true, 00:08:44.633 "unmap": true, 00:08:44.633 "flush": true, 00:08:44.633 "reset": true, 00:08:44.633 "nvme_admin": false, 00:08:44.633 "nvme_io": false, 00:08:44.633 "nvme_io_md": false, 00:08:44.633 "write_zeroes": true, 00:08:44.633 "zcopy": true, 00:08:44.633 "get_zone_info": false, 00:08:44.633 "zone_management": false, 00:08:44.633 "zone_append": false, 00:08:44.633 "compare": false, 00:08:44.633 "compare_and_write": false, 00:08:44.633 "abort": true, 00:08:44.633 "seek_hole": false, 00:08:44.633 "seek_data": false, 00:08:44.633 "copy": true, 00:08:44.633 "nvme_iov_md": false 00:08:44.633 }, 00:08:44.633 "memory_domains": [ 00:08:44.633 { 00:08:44.633 "dma_device_id": "system", 00:08:44.633 "dma_device_type": 1 00:08:44.633 }, 00:08:44.633 { 00:08:44.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:44.633 "dma_device_type": 2 00:08:44.633 } 00:08:44.633 ], 00:08:44.633 "driver_specific": {} 00:08:44.633 } 00:08:44.633 ] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.633 10:19:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.633 "name": "Existed_Raid", 00:08:44.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.633 "strip_size_kb": 64, 00:08:44.633 "state": "configuring", 00:08:44.633 "raid_level": "concat", 00:08:44.633 "superblock": false, 00:08:44.633 "num_base_bdevs": 3, 00:08:44.633 "num_base_bdevs_discovered": 1, 00:08:44.633 "num_base_bdevs_operational": 3, 00:08:44.633 "base_bdevs_list": [ 00:08:44.633 { 00:08:44.633 "name": "BaseBdev1", 00:08:44.633 "uuid": "237f569d-19fc-445c-8193-79d11d4fd311", 00:08:44.633 "is_configured": true, 00:08:44.633 "data_offset": 0, 00:08:44.633 "data_size": 65536 00:08:44.633 }, 00:08:44.633 { 00:08:44.633 "name": "BaseBdev2", 00:08:44.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.633 "is_configured": false, 00:08:44.633 "data_offset": 0, 00:08:44.633 "data_size": 0 00:08:44.633 }, 00:08:44.633 { 00:08:44.633 "name": "BaseBdev3", 00:08:44.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.633 "is_configured": false, 00:08:44.633 "data_offset": 0, 00:08:44.633 "data_size": 0 00:08:44.633 } 00:08:44.633 ] 00:08:44.633 }' 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.633 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.203 [2024-11-19 10:19:58.714256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:45.203 [2024-11-19 10:19:58.714306] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.203 [2024-11-19 10:19:58.726287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:45.203 [2024-11-19 10:19:58.728051] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:45.203 [2024-11-19 10:19:58.728089] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:45.203 [2024-11-19 10:19:58.728099] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:45.203 [2024-11-19 10:19:58.728108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.203 10:19:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.203 "name": "Existed_Raid", 00:08:45.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.203 "strip_size_kb": 64, 00:08:45.203 "state": "configuring", 00:08:45.203 "raid_level": "concat", 00:08:45.203 "superblock": false, 00:08:45.203 "num_base_bdevs": 3, 00:08:45.203 "num_base_bdevs_discovered": 1, 00:08:45.203 "num_base_bdevs_operational": 3, 00:08:45.203 "base_bdevs_list": [ 00:08:45.203 { 00:08:45.203 "name": "BaseBdev1", 00:08:45.203 "uuid": "237f569d-19fc-445c-8193-79d11d4fd311", 00:08:45.203 "is_configured": true, 00:08:45.203 "data_offset": 
0, 00:08:45.203 "data_size": 65536 00:08:45.203 }, 00:08:45.203 { 00:08:45.203 "name": "BaseBdev2", 00:08:45.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.203 "is_configured": false, 00:08:45.203 "data_offset": 0, 00:08:45.203 "data_size": 0 00:08:45.203 }, 00:08:45.203 { 00:08:45.203 "name": "BaseBdev3", 00:08:45.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.203 "is_configured": false, 00:08:45.203 "data_offset": 0, 00:08:45.203 "data_size": 0 00:08:45.203 } 00:08:45.203 ] 00:08:45.203 }' 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.203 10:19:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.464 [2024-11-19 10:19:59.139343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:45.464 BaseBdev2 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.464 [ 00:08:45.464 { 00:08:45.464 "name": "BaseBdev2", 00:08:45.464 "aliases": [ 00:08:45.464 "0c39fb1d-1927-4599-bd7a-ae2e0307cb79" 00:08:45.464 ], 00:08:45.464 "product_name": "Malloc disk", 00:08:45.464 "block_size": 512, 00:08:45.464 "num_blocks": 65536, 00:08:45.464 "uuid": "0c39fb1d-1927-4599-bd7a-ae2e0307cb79", 00:08:45.464 "assigned_rate_limits": { 00:08:45.464 "rw_ios_per_sec": 0, 00:08:45.464 "rw_mbytes_per_sec": 0, 00:08:45.464 "r_mbytes_per_sec": 0, 00:08:45.464 "w_mbytes_per_sec": 0 00:08:45.464 }, 00:08:45.464 "claimed": true, 00:08:45.464 "claim_type": "exclusive_write", 00:08:45.464 "zoned": false, 00:08:45.464 "supported_io_types": { 00:08:45.464 "read": true, 00:08:45.464 "write": true, 00:08:45.464 "unmap": true, 00:08:45.464 "flush": true, 00:08:45.464 "reset": true, 00:08:45.464 "nvme_admin": false, 00:08:45.464 "nvme_io": false, 00:08:45.464 "nvme_io_md": false, 00:08:45.464 "write_zeroes": true, 00:08:45.464 "zcopy": true, 00:08:45.464 "get_zone_info": false, 00:08:45.464 "zone_management": false, 00:08:45.464 "zone_append": false, 00:08:45.464 "compare": false, 00:08:45.464 "compare_and_write": false, 00:08:45.464 "abort": true, 00:08:45.464 "seek_hole": 
false, 00:08:45.464 "seek_data": false, 00:08:45.464 "copy": true, 00:08:45.464 "nvme_iov_md": false 00:08:45.464 }, 00:08:45.464 "memory_domains": [ 00:08:45.464 { 00:08:45.464 "dma_device_id": "system", 00:08:45.464 "dma_device_type": 1 00:08:45.464 }, 00:08:45.464 { 00:08:45.464 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:45.464 "dma_device_type": 2 00:08:45.464 } 00:08:45.464 ], 00:08:45.464 "driver_specific": {} 00:08:45.464 } 00:08:45.464 ] 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.464 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:45.465 "name": "Existed_Raid", 00:08:45.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.465 "strip_size_kb": 64, 00:08:45.465 "state": "configuring", 00:08:45.465 "raid_level": "concat", 00:08:45.465 "superblock": false, 00:08:45.465 "num_base_bdevs": 3, 00:08:45.465 "num_base_bdevs_discovered": 2, 00:08:45.465 "num_base_bdevs_operational": 3, 00:08:45.465 "base_bdevs_list": [ 00:08:45.465 { 00:08:45.465 "name": "BaseBdev1", 00:08:45.465 "uuid": "237f569d-19fc-445c-8193-79d11d4fd311", 00:08:45.465 "is_configured": true, 00:08:45.465 "data_offset": 0, 00:08:45.465 "data_size": 65536 00:08:45.465 }, 00:08:45.465 { 00:08:45.465 "name": "BaseBdev2", 00:08:45.465 "uuid": "0c39fb1d-1927-4599-bd7a-ae2e0307cb79", 00:08:45.465 "is_configured": true, 00:08:45.465 "data_offset": 0, 00:08:45.465 "data_size": 65536 00:08:45.465 }, 00:08:45.465 { 00:08:45.465 "name": "BaseBdev3", 00:08:45.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.465 "is_configured": false, 00:08:45.465 "data_offset": 0, 00:08:45.465 "data_size": 0 00:08:45.465 } 00:08:45.465 ] 00:08:45.465 }' 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.465 10:19:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.034 [2024-11-19 10:19:59.667985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:46.034 [2024-11-19 10:19:59.668049] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:46.034 [2024-11-19 10:19:59.668077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:46.034 [2024-11-19 10:19:59.668347] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:46.034 [2024-11-19 10:19:59.668544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:46.034 [2024-11-19 10:19:59.668561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:46.034 [2024-11-19 10:19:59.668814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.034 BaseBdev3 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.034 10:19:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.034 [ 00:08:46.034 { 00:08:46.034 "name": "BaseBdev3", 00:08:46.034 "aliases": [ 00:08:46.034 "b827166a-c99d-4001-8c74-9f7b867ad018" 00:08:46.034 ], 00:08:46.034 "product_name": "Malloc disk", 00:08:46.034 "block_size": 512, 00:08:46.034 "num_blocks": 65536, 00:08:46.034 "uuid": "b827166a-c99d-4001-8c74-9f7b867ad018", 00:08:46.034 "assigned_rate_limits": { 00:08:46.034 "rw_ios_per_sec": 0, 00:08:46.034 "rw_mbytes_per_sec": 0, 00:08:46.034 "r_mbytes_per_sec": 0, 00:08:46.034 "w_mbytes_per_sec": 0 00:08:46.034 }, 00:08:46.034 "claimed": true, 00:08:46.034 "claim_type": "exclusive_write", 00:08:46.034 "zoned": false, 00:08:46.034 "supported_io_types": { 00:08:46.034 "read": true, 00:08:46.034 "write": true, 00:08:46.034 "unmap": true, 00:08:46.034 "flush": true, 00:08:46.034 "reset": true, 00:08:46.034 "nvme_admin": false, 00:08:46.034 "nvme_io": false, 00:08:46.034 "nvme_io_md": false, 00:08:46.034 "write_zeroes": true, 00:08:46.034 "zcopy": true, 00:08:46.034 "get_zone_info": false, 00:08:46.034 "zone_management": false, 00:08:46.034 "zone_append": false, 00:08:46.034 "compare": false, 
00:08:46.034 "compare_and_write": false, 00:08:46.034 "abort": true, 00:08:46.034 "seek_hole": false, 00:08:46.034 "seek_data": false, 00:08:46.034 "copy": true, 00:08:46.034 "nvme_iov_md": false 00:08:46.034 }, 00:08:46.034 "memory_domains": [ 00:08:46.034 { 00:08:46.034 "dma_device_id": "system", 00:08:46.034 "dma_device_type": 1 00:08:46.034 }, 00:08:46.034 { 00:08:46.034 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.034 "dma_device_type": 2 00:08:46.034 } 00:08:46.034 ], 00:08:46.034 "driver_specific": {} 00:08:46.034 } 00:08:46.034 ] 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.034 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.034 "name": "Existed_Raid", 00:08:46.034 "uuid": "4eef22ca-1792-473f-a62a-af9571f9c6d0", 00:08:46.034 "strip_size_kb": 64, 00:08:46.034 "state": "online", 00:08:46.034 "raid_level": "concat", 00:08:46.034 "superblock": false, 00:08:46.034 "num_base_bdevs": 3, 00:08:46.034 "num_base_bdevs_discovered": 3, 00:08:46.034 "num_base_bdevs_operational": 3, 00:08:46.034 "base_bdevs_list": [ 00:08:46.034 { 00:08:46.034 "name": "BaseBdev1", 00:08:46.034 "uuid": "237f569d-19fc-445c-8193-79d11d4fd311", 00:08:46.034 "is_configured": true, 00:08:46.034 "data_offset": 0, 00:08:46.034 "data_size": 65536 00:08:46.034 }, 00:08:46.034 { 00:08:46.034 "name": "BaseBdev2", 00:08:46.034 "uuid": "0c39fb1d-1927-4599-bd7a-ae2e0307cb79", 00:08:46.034 "is_configured": true, 00:08:46.034 "data_offset": 0, 00:08:46.034 "data_size": 65536 00:08:46.034 }, 00:08:46.034 { 00:08:46.035 "name": "BaseBdev3", 00:08:46.035 "uuid": "b827166a-c99d-4001-8c74-9f7b867ad018", 00:08:46.035 "is_configured": true, 00:08:46.035 "data_offset": 0, 00:08:46.035 "data_size": 65536 00:08:46.035 } 00:08:46.035 ] 00:08:46.035 }' 00:08:46.035 10:19:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:46.035 10:19:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:46.604 [2024-11-19 10:20:00.167456] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.604 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:46.604 "name": "Existed_Raid", 00:08:46.604 "aliases": [ 00:08:46.604 "4eef22ca-1792-473f-a62a-af9571f9c6d0" 00:08:46.604 ], 00:08:46.604 "product_name": "Raid Volume", 00:08:46.604 "block_size": 512, 00:08:46.604 "num_blocks": 196608, 00:08:46.604 "uuid": "4eef22ca-1792-473f-a62a-af9571f9c6d0", 00:08:46.604 "assigned_rate_limits": { 00:08:46.604 "rw_ios_per_sec": 0, 00:08:46.604 "rw_mbytes_per_sec": 0, 00:08:46.604 "r_mbytes_per_sec": 
0, 00:08:46.604 "w_mbytes_per_sec": 0 00:08:46.604 }, 00:08:46.604 "claimed": false, 00:08:46.604 "zoned": false, 00:08:46.604 "supported_io_types": { 00:08:46.604 "read": true, 00:08:46.604 "write": true, 00:08:46.604 "unmap": true, 00:08:46.604 "flush": true, 00:08:46.604 "reset": true, 00:08:46.604 "nvme_admin": false, 00:08:46.604 "nvme_io": false, 00:08:46.604 "nvme_io_md": false, 00:08:46.604 "write_zeroes": true, 00:08:46.604 "zcopy": false, 00:08:46.604 "get_zone_info": false, 00:08:46.604 "zone_management": false, 00:08:46.604 "zone_append": false, 00:08:46.604 "compare": false, 00:08:46.604 "compare_and_write": false, 00:08:46.604 "abort": false, 00:08:46.604 "seek_hole": false, 00:08:46.605 "seek_data": false, 00:08:46.605 "copy": false, 00:08:46.605 "nvme_iov_md": false 00:08:46.605 }, 00:08:46.605 "memory_domains": [ 00:08:46.605 { 00:08:46.605 "dma_device_id": "system", 00:08:46.605 "dma_device_type": 1 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.605 "dma_device_type": 2 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "dma_device_id": "system", 00:08:46.605 "dma_device_type": 1 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.605 "dma_device_type": 2 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "dma_device_id": "system", 00:08:46.605 "dma_device_type": 1 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.605 "dma_device_type": 2 00:08:46.605 } 00:08:46.605 ], 00:08:46.605 "driver_specific": { 00:08:46.605 "raid": { 00:08:46.605 "uuid": "4eef22ca-1792-473f-a62a-af9571f9c6d0", 00:08:46.605 "strip_size_kb": 64, 00:08:46.605 "state": "online", 00:08:46.605 "raid_level": "concat", 00:08:46.605 "superblock": false, 00:08:46.605 "num_base_bdevs": 3, 00:08:46.605 "num_base_bdevs_discovered": 3, 00:08:46.605 "num_base_bdevs_operational": 3, 00:08:46.605 "base_bdevs_list": [ 00:08:46.605 { 00:08:46.605 "name": "BaseBdev1", 
00:08:46.605 "uuid": "237f569d-19fc-445c-8193-79d11d4fd311", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 0, 00:08:46.605 "data_size": 65536 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "name": "BaseBdev2", 00:08:46.605 "uuid": "0c39fb1d-1927-4599-bd7a-ae2e0307cb79", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 0, 00:08:46.605 "data_size": 65536 00:08:46.605 }, 00:08:46.605 { 00:08:46.605 "name": "BaseBdev3", 00:08:46.605 "uuid": "b827166a-c99d-4001-8c74-9f7b867ad018", 00:08:46.605 "is_configured": true, 00:08:46.605 "data_offset": 0, 00:08:46.605 "data_size": 65536 00:08:46.605 } 00:08:46.605 ] 00:08:46.605 } 00:08:46.605 } 00:08:46.605 }' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:46.605 BaseBdev2 00:08:46.605 BaseBdev3' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.605 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:46.864 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.864 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:08:46.864 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:46.864 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:46.864 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.864 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.865 [2024-11-19 10:20:00.426758] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:46.865 [2024-11-19 10:20:00.426790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:46.865 [2024-11-19 10:20:00.426839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.865 "name": "Existed_Raid", 00:08:46.865 "uuid": "4eef22ca-1792-473f-a62a-af9571f9c6d0", 00:08:46.865 "strip_size_kb": 64, 00:08:46.865 "state": "offline", 00:08:46.865 "raid_level": "concat", 00:08:46.865 "superblock": false, 00:08:46.865 "num_base_bdevs": 3, 00:08:46.865 "num_base_bdevs_discovered": 2, 00:08:46.865 "num_base_bdevs_operational": 2, 00:08:46.865 "base_bdevs_list": [ 00:08:46.865 { 00:08:46.865 "name": null, 00:08:46.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.865 "is_configured": false, 00:08:46.865 "data_offset": 0, 00:08:46.865 "data_size": 65536 00:08:46.865 }, 00:08:46.865 { 00:08:46.865 "name": "BaseBdev2", 00:08:46.865 "uuid": 
"0c39fb1d-1927-4599-bd7a-ae2e0307cb79", 00:08:46.865 "is_configured": true, 00:08:46.865 "data_offset": 0, 00:08:46.865 "data_size": 65536 00:08:46.865 }, 00:08:46.865 { 00:08:46.865 "name": "BaseBdev3", 00:08:46.865 "uuid": "b827166a-c99d-4001-8c74-9f7b867ad018", 00:08:46.865 "is_configured": true, 00:08:46.865 "data_offset": 0, 00:08:46.865 "data_size": 65536 00:08:46.865 } 00:08:46.865 ] 00:08:46.865 }' 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.865 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.434 10:20:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.434 [2024-11-19 10:20:01.012849] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.434 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.434 [2024-11-19 10:20:01.157723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:47.434 [2024-11-19 10:20:01.157778] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:47.695 10:20:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 BaseBdev2 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.695 
10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 [ 00:08:47.695 { 00:08:47.695 "name": "BaseBdev2", 00:08:47.695 "aliases": [ 00:08:47.695 "ff4f0d82-ddd6-4d13-a3dc-77addf004423" 00:08:47.695 ], 00:08:47.695 "product_name": "Malloc disk", 00:08:47.695 "block_size": 512, 00:08:47.695 "num_blocks": 65536, 00:08:47.695 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:47.695 "assigned_rate_limits": { 00:08:47.695 "rw_ios_per_sec": 0, 00:08:47.695 "rw_mbytes_per_sec": 0, 00:08:47.695 "r_mbytes_per_sec": 0, 00:08:47.695 "w_mbytes_per_sec": 0 00:08:47.695 }, 00:08:47.695 "claimed": false, 00:08:47.695 "zoned": false, 00:08:47.695 "supported_io_types": { 00:08:47.695 "read": true, 00:08:47.695 "write": true, 00:08:47.695 "unmap": true, 00:08:47.695 "flush": true, 00:08:47.695 "reset": true, 00:08:47.695 "nvme_admin": false, 00:08:47.695 "nvme_io": false, 00:08:47.695 "nvme_io_md": false, 00:08:47.695 "write_zeroes": true, 
00:08:47.695 "zcopy": true, 00:08:47.695 "get_zone_info": false, 00:08:47.695 "zone_management": false, 00:08:47.695 "zone_append": false, 00:08:47.695 "compare": false, 00:08:47.695 "compare_and_write": false, 00:08:47.695 "abort": true, 00:08:47.695 "seek_hole": false, 00:08:47.695 "seek_data": false, 00:08:47.695 "copy": true, 00:08:47.695 "nvme_iov_md": false 00:08:47.695 }, 00:08:47.695 "memory_domains": [ 00:08:47.695 { 00:08:47.695 "dma_device_id": "system", 00:08:47.695 "dma_device_type": 1 00:08:47.695 }, 00:08:47.695 { 00:08:47.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.695 "dma_device_type": 2 00:08:47.695 } 00:08:47.695 ], 00:08:47.695 "driver_specific": {} 00:08:47.695 } 00:08:47.695 ] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 BaseBdev3 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.695 10:20:01 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 [ 00:08:47.695 { 00:08:47.695 "name": "BaseBdev3", 00:08:47.695 "aliases": [ 00:08:47.695 "1eb8f8b1-2621-401f-9b18-5cb9a7d38634" 00:08:47.695 ], 00:08:47.695 "product_name": "Malloc disk", 00:08:47.695 "block_size": 512, 00:08:47.695 "num_blocks": 65536, 00:08:47.695 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:47.695 "assigned_rate_limits": { 00:08:47.695 "rw_ios_per_sec": 0, 00:08:47.695 "rw_mbytes_per_sec": 0, 00:08:47.695 "r_mbytes_per_sec": 0, 00:08:47.695 "w_mbytes_per_sec": 0 00:08:47.695 }, 00:08:47.695 "claimed": false, 00:08:47.695 "zoned": false, 00:08:47.695 "supported_io_types": { 00:08:47.695 "read": true, 00:08:47.695 "write": true, 00:08:47.695 "unmap": true, 00:08:47.695 "flush": true, 00:08:47.695 "reset": true, 00:08:47.695 "nvme_admin": false, 00:08:47.695 "nvme_io": false, 00:08:47.695 "nvme_io_md": false, 00:08:47.695 "write_zeroes": true, 
00:08:47.695 "zcopy": true, 00:08:47.695 "get_zone_info": false, 00:08:47.695 "zone_management": false, 00:08:47.695 "zone_append": false, 00:08:47.695 "compare": false, 00:08:47.695 "compare_and_write": false, 00:08:47.695 "abort": true, 00:08:47.695 "seek_hole": false, 00:08:47.695 "seek_data": false, 00:08:47.695 "copy": true, 00:08:47.695 "nvme_iov_md": false 00:08:47.695 }, 00:08:47.695 "memory_domains": [ 00:08:47.695 { 00:08:47.695 "dma_device_id": "system", 00:08:47.695 "dma_device_type": 1 00:08:47.695 }, 00:08:47.695 { 00:08:47.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.695 "dma_device_type": 2 00:08:47.695 } 00:08:47.695 ], 00:08:47.695 "driver_specific": {} 00:08:47.695 } 00:08:47.695 ] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 [2024-11-19 10:20:01.434782] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.695 [2024-11-19 10:20:01.434821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.695 [2024-11-19 10:20:01.434841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.695 [2024-11-19 10:20:01.436521] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.695 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.954 10:20:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.954 "name": "Existed_Raid", 00:08:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.954 "strip_size_kb": 64, 00:08:47.954 "state": "configuring", 00:08:47.954 "raid_level": "concat", 00:08:47.954 "superblock": false, 00:08:47.954 "num_base_bdevs": 3, 00:08:47.954 "num_base_bdevs_discovered": 2, 00:08:47.954 "num_base_bdevs_operational": 3, 00:08:47.954 "base_bdevs_list": [ 00:08:47.954 { 00:08:47.954 "name": "BaseBdev1", 00:08:47.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.954 "is_configured": false, 00:08:47.954 "data_offset": 0, 00:08:47.954 "data_size": 0 00:08:47.954 }, 00:08:47.954 { 00:08:47.954 "name": "BaseBdev2", 00:08:47.954 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:47.954 "is_configured": true, 00:08:47.954 "data_offset": 0, 00:08:47.954 "data_size": 65536 00:08:47.954 }, 00:08:47.954 { 00:08:47.954 "name": "BaseBdev3", 00:08:47.954 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:47.954 "is_configured": true, 00:08:47.954 "data_offset": 0, 00:08:47.954 "data_size": 65536 00:08:47.954 } 00:08:47.954 ] 00:08:47.954 }' 00:08:47.954 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.954 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.214 [2024-11-19 10:20:01.890014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.214 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.215 "name": "Existed_Raid", 00:08:48.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.215 "strip_size_kb": 64, 00:08:48.215 "state": "configuring", 00:08:48.215 "raid_level": "concat", 00:08:48.215 "superblock": false, 
00:08:48.215 "num_base_bdevs": 3, 00:08:48.215 "num_base_bdevs_discovered": 1, 00:08:48.215 "num_base_bdevs_operational": 3, 00:08:48.215 "base_bdevs_list": [ 00:08:48.215 { 00:08:48.215 "name": "BaseBdev1", 00:08:48.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.215 "is_configured": false, 00:08:48.215 "data_offset": 0, 00:08:48.215 "data_size": 0 00:08:48.215 }, 00:08:48.215 { 00:08:48.215 "name": null, 00:08:48.215 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:48.215 "is_configured": false, 00:08:48.215 "data_offset": 0, 00:08:48.215 "data_size": 65536 00:08:48.215 }, 00:08:48.215 { 00:08:48.215 "name": "BaseBdev3", 00:08:48.215 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:48.215 "is_configured": true, 00:08:48.215 "data_offset": 0, 00:08:48.215 "data_size": 65536 00:08:48.215 } 00:08:48.215 ] 00:08:48.215 }' 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.215 10:20:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.785 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.785 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.786 
10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.786 [2024-11-19 10:20:02.367926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.786 BaseBdev1 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.786 [ 00:08:48.786 { 00:08:48.786 "name": "BaseBdev1", 00:08:48.786 "aliases": [ 00:08:48.786 "417a1c61-a85a-42de-abed-d4ddf907a6a2" 00:08:48.786 ], 00:08:48.786 "product_name": 
"Malloc disk", 00:08:48.786 "block_size": 512, 00:08:48.786 "num_blocks": 65536, 00:08:48.786 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:48.786 "assigned_rate_limits": { 00:08:48.786 "rw_ios_per_sec": 0, 00:08:48.786 "rw_mbytes_per_sec": 0, 00:08:48.786 "r_mbytes_per_sec": 0, 00:08:48.786 "w_mbytes_per_sec": 0 00:08:48.786 }, 00:08:48.786 "claimed": true, 00:08:48.786 "claim_type": "exclusive_write", 00:08:48.786 "zoned": false, 00:08:48.786 "supported_io_types": { 00:08:48.786 "read": true, 00:08:48.786 "write": true, 00:08:48.786 "unmap": true, 00:08:48.786 "flush": true, 00:08:48.786 "reset": true, 00:08:48.786 "nvme_admin": false, 00:08:48.786 "nvme_io": false, 00:08:48.786 "nvme_io_md": false, 00:08:48.786 "write_zeroes": true, 00:08:48.786 "zcopy": true, 00:08:48.786 "get_zone_info": false, 00:08:48.786 "zone_management": false, 00:08:48.786 "zone_append": false, 00:08:48.786 "compare": false, 00:08:48.786 "compare_and_write": false, 00:08:48.786 "abort": true, 00:08:48.786 "seek_hole": false, 00:08:48.786 "seek_data": false, 00:08:48.786 "copy": true, 00:08:48.786 "nvme_iov_md": false 00:08:48.786 }, 00:08:48.786 "memory_domains": [ 00:08:48.786 { 00:08:48.786 "dma_device_id": "system", 00:08:48.786 "dma_device_type": 1 00:08:48.786 }, 00:08:48.786 { 00:08:48.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.786 "dma_device_type": 2 00:08:48.786 } 00:08:48.786 ], 00:08:48.786 "driver_specific": {} 00:08:48.786 } 00:08:48.786 ] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.786 10:20:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.786 "name": "Existed_Raid", 00:08:48.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.786 "strip_size_kb": 64, 00:08:48.786 "state": "configuring", 00:08:48.786 "raid_level": "concat", 00:08:48.786 "superblock": false, 00:08:48.786 "num_base_bdevs": 3, 00:08:48.786 "num_base_bdevs_discovered": 2, 00:08:48.786 "num_base_bdevs_operational": 3, 00:08:48.786 "base_bdevs_list": [ 00:08:48.786 { 00:08:48.786 "name": "BaseBdev1", 
00:08:48.786 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:48.786 "is_configured": true, 00:08:48.786 "data_offset": 0, 00:08:48.786 "data_size": 65536 00:08:48.786 }, 00:08:48.786 { 00:08:48.786 "name": null, 00:08:48.786 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:48.786 "is_configured": false, 00:08:48.786 "data_offset": 0, 00:08:48.786 "data_size": 65536 00:08:48.786 }, 00:08:48.786 { 00:08:48.786 "name": "BaseBdev3", 00:08:48.786 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:48.786 "is_configured": true, 00:08:48.786 "data_offset": 0, 00:08:48.786 "data_size": 65536 00:08:48.786 } 00:08:48.786 ] 00:08:48.786 }' 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.786 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 [2024-11-19 10:20:02.879090] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:49.355 
10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.355 "name": "Existed_Raid", 00:08:49.355 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:49.355 "strip_size_kb": 64, 00:08:49.355 "state": "configuring", 00:08:49.355 "raid_level": "concat", 00:08:49.355 "superblock": false, 00:08:49.355 "num_base_bdevs": 3, 00:08:49.355 "num_base_bdevs_discovered": 1, 00:08:49.355 "num_base_bdevs_operational": 3, 00:08:49.355 "base_bdevs_list": [ 00:08:49.355 { 00:08:49.355 "name": "BaseBdev1", 00:08:49.355 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:49.355 "is_configured": true, 00:08:49.355 "data_offset": 0, 00:08:49.355 "data_size": 65536 00:08:49.355 }, 00:08:49.355 { 00:08:49.355 "name": null, 00:08:49.355 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:49.355 "is_configured": false, 00:08:49.355 "data_offset": 0, 00:08:49.355 "data_size": 65536 00:08:49.355 }, 00:08:49.355 { 00:08:49.355 "name": null, 00:08:49.355 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:49.355 "is_configured": false, 00:08:49.355 "data_offset": 0, 00:08:49.355 "data_size": 65536 00:08:49.355 } 00:08:49.355 ] 00:08:49.355 }' 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.355 10:20:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.655 [2024-11-19 10:20:03.338349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.655 "name": "Existed_Raid", 00:08:49.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.655 "strip_size_kb": 64, 00:08:49.655 "state": "configuring", 00:08:49.655 "raid_level": "concat", 00:08:49.655 "superblock": false, 00:08:49.655 "num_base_bdevs": 3, 00:08:49.655 "num_base_bdevs_discovered": 2, 00:08:49.655 "num_base_bdevs_operational": 3, 00:08:49.655 "base_bdevs_list": [ 00:08:49.655 { 00:08:49.655 "name": "BaseBdev1", 00:08:49.655 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:49.655 "is_configured": true, 00:08:49.655 "data_offset": 0, 00:08:49.655 "data_size": 65536 00:08:49.655 }, 00:08:49.655 { 00:08:49.655 "name": null, 00:08:49.655 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:49.655 "is_configured": false, 00:08:49.655 "data_offset": 0, 00:08:49.655 "data_size": 65536 00:08:49.655 }, 00:08:49.655 { 00:08:49.655 "name": "BaseBdev3", 00:08:49.655 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:49.655 "is_configured": true, 00:08:49.655 "data_offset": 0, 00:08:49.655 "data_size": 65536 00:08:49.655 } 00:08:49.655 ] 00:08:49.655 }' 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.655 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.226 [2024-11-19 10:20:03.785604] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.226 10:20:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.226 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.226 "name": "Existed_Raid", 00:08:50.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.226 "strip_size_kb": 64, 00:08:50.226 "state": "configuring", 00:08:50.226 "raid_level": "concat", 00:08:50.226 "superblock": false, 00:08:50.226 "num_base_bdevs": 3, 00:08:50.226 "num_base_bdevs_discovered": 1, 00:08:50.226 "num_base_bdevs_operational": 3, 00:08:50.226 "base_bdevs_list": [ 00:08:50.226 { 00:08:50.226 "name": null, 00:08:50.226 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:50.226 "is_configured": false, 00:08:50.226 "data_offset": 0, 00:08:50.226 "data_size": 65536 00:08:50.226 }, 00:08:50.226 { 00:08:50.227 "name": null, 00:08:50.227 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:50.227 "is_configured": false, 00:08:50.227 "data_offset": 0, 00:08:50.227 "data_size": 65536 00:08:50.227 }, 00:08:50.227 { 00:08:50.227 "name": "BaseBdev3", 00:08:50.227 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:50.227 "is_configured": true, 00:08:50.227 "data_offset": 0, 00:08:50.227 "data_size": 65536 00:08:50.227 } 00:08:50.227 ] 00:08:50.227 }' 00:08:50.227 10:20:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.227 10:20:03 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.797 [2024-11-19 10:20:04.376303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.797 10:20:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.797 "name": "Existed_Raid", 00:08:50.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.797 "strip_size_kb": 64, 00:08:50.797 "state": "configuring", 00:08:50.797 "raid_level": "concat", 00:08:50.797 "superblock": false, 00:08:50.797 "num_base_bdevs": 3, 00:08:50.797 "num_base_bdevs_discovered": 2, 00:08:50.797 "num_base_bdevs_operational": 3, 00:08:50.797 "base_bdevs_list": [ 00:08:50.797 { 00:08:50.797 "name": null, 00:08:50.797 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:50.797 "is_configured": false, 00:08:50.797 "data_offset": 0, 00:08:50.797 "data_size": 65536 00:08:50.797 }, 00:08:50.797 { 00:08:50.797 "name": "BaseBdev2", 00:08:50.797 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:50.797 "is_configured": true, 00:08:50.797 "data_offset": 
0, 00:08:50.797 "data_size": 65536 00:08:50.797 }, 00:08:50.797 { 00:08:50.797 "name": "BaseBdev3", 00:08:50.797 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:50.797 "is_configured": true, 00:08:50.797 "data_offset": 0, 00:08:50.797 "data_size": 65536 00:08:50.797 } 00:08:50.797 ] 00:08:50.797 }' 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.797 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.056 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.056 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.057 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 417a1c61-a85a-42de-abed-d4ddf907a6a2 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.317 [2024-11-19 10:20:04.895661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:51.317 [2024-11-19 10:20:04.895717] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:51.317 [2024-11-19 10:20:04.895727] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:51.317 [2024-11-19 10:20:04.895986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:51.317 [2024-11-19 10:20:04.896150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:51.317 [2024-11-19 10:20:04.896167] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:51.317 [2024-11-19 10:20:04.896402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.317 NewBaseBdev 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.317 
10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.317 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.317 [ 00:08:51.317 { 00:08:51.317 "name": "NewBaseBdev", 00:08:51.317 "aliases": [ 00:08:51.317 "417a1c61-a85a-42de-abed-d4ddf907a6a2" 00:08:51.317 ], 00:08:51.317 "product_name": "Malloc disk", 00:08:51.317 "block_size": 512, 00:08:51.317 "num_blocks": 65536, 00:08:51.317 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:51.317 "assigned_rate_limits": { 00:08:51.317 "rw_ios_per_sec": 0, 00:08:51.317 "rw_mbytes_per_sec": 0, 00:08:51.317 "r_mbytes_per_sec": 0, 00:08:51.317 "w_mbytes_per_sec": 0 00:08:51.317 }, 00:08:51.317 "claimed": true, 00:08:51.317 "claim_type": "exclusive_write", 00:08:51.317 "zoned": false, 00:08:51.317 "supported_io_types": { 00:08:51.317 "read": true, 00:08:51.317 "write": true, 00:08:51.317 "unmap": true, 00:08:51.317 "flush": true, 00:08:51.317 "reset": true, 00:08:51.317 "nvme_admin": false, 00:08:51.317 "nvme_io": false, 00:08:51.318 "nvme_io_md": false, 00:08:51.318 "write_zeroes": true, 00:08:51.318 "zcopy": true, 00:08:51.318 "get_zone_info": false, 00:08:51.318 "zone_management": false, 00:08:51.318 "zone_append": false, 00:08:51.318 "compare": false, 00:08:51.318 "compare_and_write": false, 00:08:51.318 "abort": true, 00:08:51.318 "seek_hole": false, 00:08:51.318 "seek_data": false, 00:08:51.318 "copy": true, 00:08:51.318 "nvme_iov_md": false 00:08:51.318 }, 00:08:51.318 
"memory_domains": [ 00:08:51.318 { 00:08:51.318 "dma_device_id": "system", 00:08:51.318 "dma_device_type": 1 00:08:51.318 }, 00:08:51.318 { 00:08:51.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.318 "dma_device_type": 2 00:08:51.318 } 00:08:51.318 ], 00:08:51.318 "driver_specific": {} 00:08:51.318 } 00:08:51.318 ] 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.318 "name": "Existed_Raid", 00:08:51.318 "uuid": "d58782c8-d368-449e-8bd8-c052060e0da1", 00:08:51.318 "strip_size_kb": 64, 00:08:51.318 "state": "online", 00:08:51.318 "raid_level": "concat", 00:08:51.318 "superblock": false, 00:08:51.318 "num_base_bdevs": 3, 00:08:51.318 "num_base_bdevs_discovered": 3, 00:08:51.318 "num_base_bdevs_operational": 3, 00:08:51.318 "base_bdevs_list": [ 00:08:51.318 { 00:08:51.318 "name": "NewBaseBdev", 00:08:51.318 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:51.318 "is_configured": true, 00:08:51.318 "data_offset": 0, 00:08:51.318 "data_size": 65536 00:08:51.318 }, 00:08:51.318 { 00:08:51.318 "name": "BaseBdev2", 00:08:51.318 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:51.318 "is_configured": true, 00:08:51.318 "data_offset": 0, 00:08:51.318 "data_size": 65536 00:08:51.318 }, 00:08:51.318 { 00:08:51.318 "name": "BaseBdev3", 00:08:51.318 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:51.318 "is_configured": true, 00:08:51.318 "data_offset": 0, 00:08:51.318 "data_size": 65536 00:08:51.318 } 00:08:51.318 ] 00:08:51.318 }' 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.318 10:20:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.888 [2024-11-19 10:20:05.387197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:51.888 "name": "Existed_Raid", 00:08:51.888 "aliases": [ 00:08:51.888 "d58782c8-d368-449e-8bd8-c052060e0da1" 00:08:51.888 ], 00:08:51.888 "product_name": "Raid Volume", 00:08:51.888 "block_size": 512, 00:08:51.888 "num_blocks": 196608, 00:08:51.888 "uuid": "d58782c8-d368-449e-8bd8-c052060e0da1", 00:08:51.888 "assigned_rate_limits": { 00:08:51.888 "rw_ios_per_sec": 0, 00:08:51.888 "rw_mbytes_per_sec": 0, 00:08:51.888 "r_mbytes_per_sec": 0, 00:08:51.888 "w_mbytes_per_sec": 0 00:08:51.888 }, 00:08:51.888 "claimed": false, 00:08:51.888 "zoned": false, 00:08:51.888 "supported_io_types": { 00:08:51.888 "read": true, 00:08:51.888 "write": true, 00:08:51.888 "unmap": true, 00:08:51.888 "flush": true, 00:08:51.888 "reset": true, 00:08:51.888 "nvme_admin": false, 00:08:51.888 "nvme_io": false, 00:08:51.888 "nvme_io_md": false, 00:08:51.888 "write_zeroes": true, 
00:08:51.888 "zcopy": false, 00:08:51.888 "get_zone_info": false, 00:08:51.888 "zone_management": false, 00:08:51.888 "zone_append": false, 00:08:51.888 "compare": false, 00:08:51.888 "compare_and_write": false, 00:08:51.888 "abort": false, 00:08:51.888 "seek_hole": false, 00:08:51.888 "seek_data": false, 00:08:51.888 "copy": false, 00:08:51.888 "nvme_iov_md": false 00:08:51.888 }, 00:08:51.888 "memory_domains": [ 00:08:51.888 { 00:08:51.888 "dma_device_id": "system", 00:08:51.888 "dma_device_type": 1 00:08:51.888 }, 00:08:51.888 { 00:08:51.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.888 "dma_device_type": 2 00:08:51.888 }, 00:08:51.888 { 00:08:51.888 "dma_device_id": "system", 00:08:51.888 "dma_device_type": 1 00:08:51.888 }, 00:08:51.888 { 00:08:51.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.888 "dma_device_type": 2 00:08:51.888 }, 00:08:51.888 { 00:08:51.888 "dma_device_id": "system", 00:08:51.888 "dma_device_type": 1 00:08:51.888 }, 00:08:51.888 { 00:08:51.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.888 "dma_device_type": 2 00:08:51.888 } 00:08:51.888 ], 00:08:51.888 "driver_specific": { 00:08:51.888 "raid": { 00:08:51.888 "uuid": "d58782c8-d368-449e-8bd8-c052060e0da1", 00:08:51.888 "strip_size_kb": 64, 00:08:51.888 "state": "online", 00:08:51.888 "raid_level": "concat", 00:08:51.888 "superblock": false, 00:08:51.888 "num_base_bdevs": 3, 00:08:51.888 "num_base_bdevs_discovered": 3, 00:08:51.888 "num_base_bdevs_operational": 3, 00:08:51.888 "base_bdevs_list": [ 00:08:51.888 { 00:08:51.888 "name": "NewBaseBdev", 00:08:51.888 "uuid": "417a1c61-a85a-42de-abed-d4ddf907a6a2", 00:08:51.888 "is_configured": true, 00:08:51.888 "data_offset": 0, 00:08:51.888 "data_size": 65536 00:08:51.888 }, 00:08:51.888 { 00:08:51.888 "name": "BaseBdev2", 00:08:51.888 "uuid": "ff4f0d82-ddd6-4d13-a3dc-77addf004423", 00:08:51.888 "is_configured": true, 00:08:51.888 "data_offset": 0, 00:08:51.888 "data_size": 65536 00:08:51.888 }, 00:08:51.888 { 
00:08:51.888 "name": "BaseBdev3", 00:08:51.888 "uuid": "1eb8f8b1-2621-401f-9b18-5cb9a7d38634", 00:08:51.888 "is_configured": true, 00:08:51.888 "data_offset": 0, 00:08:51.888 "data_size": 65536 00:08:51.888 } 00:08:51.888 ] 00:08:51.888 } 00:08:51.888 } 00:08:51.888 }' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:51.888 BaseBdev2 00:08:51.888 BaseBdev3' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.888 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:51.889 [2024-11-19 10:20:05.646443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:51.889 [2024-11-19 10:20:05.646473] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:51.889 [2024-11-19 10:20:05.646543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:51.889 [2024-11-19 10:20:05.646597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:51.889 [2024-11-19 10:20:05.646609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65437 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65437 ']' 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65437 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:51.889 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65437 00:08:52.149 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.149 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.149 killing process with pid 65437 00:08:52.149 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65437' 00:08:52.149 10:20:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65437 00:08:52.149 [2024-11-19 10:20:05.697679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.149 10:20:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65437 00:08:52.408 [2024-11-19 10:20:05.985333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:53.348 00:08:53.348 real 0m10.176s 00:08:53.348 user 0m16.235s 00:08:53.348 sys 0m1.756s 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.348 ************************************ 00:08:53.348 END TEST raid_state_function_test 00:08:53.348 ************************************ 00:08:53.348 10:20:07 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:08:53.348 10:20:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.348 10:20:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.348 10:20:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:53.348 ************************************ 00:08:53.348 START TEST raid_state_function_test_sb 00:08:53.348 ************************************ 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:53.348 Process raid pid: 66058 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66058 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66058' 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66058 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66058 ']' 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.348 10:20:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.607 [2024-11-19 10:20:07.193099] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:08:53.607 [2024-11-19 10:20:07.193291] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.607 [2024-11-19 10:20:07.368429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.866 [2024-11-19 10:20:07.474934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.125 [2024-11-19 10:20:07.667513] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.125 [2024-11-19 10:20:07.667627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.383 [2024-11-19 10:20:08.016922] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.383 [2024-11-19 10:20:08.017058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.383 [2024-11-19 
10:20:08.017074] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.383 [2024-11-19 10:20:08.017085] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.383 [2024-11-19 10:20:08.017091] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.383 [2024-11-19 10:20:08.017099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.383 10:20:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.384 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.384 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.384 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.384 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.384 "name": "Existed_Raid", 00:08:54.384 "uuid": "f2d524c0-d048-4d05-aec2-ee331355078c", 00:08:54.384 "strip_size_kb": 64, 00:08:54.384 "state": "configuring", 00:08:54.384 "raid_level": "concat", 00:08:54.384 "superblock": true, 00:08:54.384 "num_base_bdevs": 3, 00:08:54.384 "num_base_bdevs_discovered": 0, 00:08:54.384 "num_base_bdevs_operational": 3, 00:08:54.384 "base_bdevs_list": [ 00:08:54.384 { 00:08:54.384 "name": "BaseBdev1", 00:08:54.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.384 "is_configured": false, 00:08:54.384 "data_offset": 0, 00:08:54.384 "data_size": 0 00:08:54.384 }, 00:08:54.384 { 00:08:54.384 "name": "BaseBdev2", 00:08:54.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.384 "is_configured": false, 00:08:54.384 "data_offset": 0, 00:08:54.384 "data_size": 0 00:08:54.384 }, 00:08:54.384 { 00:08:54.384 "name": "BaseBdev3", 00:08:54.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.384 "is_configured": false, 00:08:54.384 "data_offset": 0, 00:08:54.384 "data_size": 0 00:08:54.384 } 00:08:54.384 ] 00:08:54.384 }' 00:08:54.384 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.384 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.952 [2024-11-19 10:20:08.484080] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.952 [2024-11-19 10:20:08.484163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.952 [2024-11-19 10:20:08.496063] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:54.952 [2024-11-19 10:20:08.496142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:54.952 [2024-11-19 10:20:08.496170] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:54.952 [2024-11-19 10:20:08.496193] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:54.952 [2024-11-19 10:20:08.496211] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:54.952 [2024-11-19 10:20:08.496261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:54.952 
10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.952 [2024-11-19 10:20:08.541369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:54.952 BaseBdev1 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.952 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.952 [ 00:08:54.952 { 
00:08:54.952 "name": "BaseBdev1", 00:08:54.952 "aliases": [ 00:08:54.952 "dc22c447-e024-427e-93f9-919e7494b081" 00:08:54.952 ], 00:08:54.952 "product_name": "Malloc disk", 00:08:54.952 "block_size": 512, 00:08:54.952 "num_blocks": 65536, 00:08:54.953 "uuid": "dc22c447-e024-427e-93f9-919e7494b081", 00:08:54.953 "assigned_rate_limits": { 00:08:54.953 "rw_ios_per_sec": 0, 00:08:54.953 "rw_mbytes_per_sec": 0, 00:08:54.953 "r_mbytes_per_sec": 0, 00:08:54.953 "w_mbytes_per_sec": 0 00:08:54.953 }, 00:08:54.953 "claimed": true, 00:08:54.953 "claim_type": "exclusive_write", 00:08:54.953 "zoned": false, 00:08:54.953 "supported_io_types": { 00:08:54.953 "read": true, 00:08:54.953 "write": true, 00:08:54.953 "unmap": true, 00:08:54.953 "flush": true, 00:08:54.953 "reset": true, 00:08:54.953 "nvme_admin": false, 00:08:54.953 "nvme_io": false, 00:08:54.953 "nvme_io_md": false, 00:08:54.953 "write_zeroes": true, 00:08:54.953 "zcopy": true, 00:08:54.953 "get_zone_info": false, 00:08:54.953 "zone_management": false, 00:08:54.953 "zone_append": false, 00:08:54.953 "compare": false, 00:08:54.953 "compare_and_write": false, 00:08:54.953 "abort": true, 00:08:54.953 "seek_hole": false, 00:08:54.953 "seek_data": false, 00:08:54.953 "copy": true, 00:08:54.953 "nvme_iov_md": false 00:08:54.953 }, 00:08:54.953 "memory_domains": [ 00:08:54.953 { 00:08:54.953 "dma_device_id": "system", 00:08:54.953 "dma_device_type": 1 00:08:54.953 }, 00:08:54.953 { 00:08:54.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.953 "dma_device_type": 2 00:08:54.953 } 00:08:54.953 ], 00:08:54.953 "driver_specific": {} 00:08:54.953 } 00:08:54.953 ] 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.953 "name": "Existed_Raid", 00:08:54.953 "uuid": "edfc68ab-1286-4686-9230-ebcd2d22e225", 00:08:54.953 "strip_size_kb": 64, 00:08:54.953 "state": "configuring", 00:08:54.953 "raid_level": "concat", 00:08:54.953 "superblock": true, 00:08:54.953 
"num_base_bdevs": 3, 00:08:54.953 "num_base_bdevs_discovered": 1, 00:08:54.953 "num_base_bdevs_operational": 3, 00:08:54.953 "base_bdevs_list": [ 00:08:54.953 { 00:08:54.953 "name": "BaseBdev1", 00:08:54.953 "uuid": "dc22c447-e024-427e-93f9-919e7494b081", 00:08:54.953 "is_configured": true, 00:08:54.953 "data_offset": 2048, 00:08:54.953 "data_size": 63488 00:08:54.953 }, 00:08:54.953 { 00:08:54.953 "name": "BaseBdev2", 00:08:54.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.953 "is_configured": false, 00:08:54.953 "data_offset": 0, 00:08:54.953 "data_size": 0 00:08:54.953 }, 00:08:54.953 { 00:08:54.953 "name": "BaseBdev3", 00:08:54.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.953 "is_configured": false, 00:08:54.953 "data_offset": 0, 00:08:54.953 "data_size": 0 00:08:54.953 } 00:08:54.953 ] 00:08:54.953 }' 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.953 10:20:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 [2024-11-19 10:20:09.008598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.521 [2024-11-19 10:20:09.008643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:55.521 
10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 [2024-11-19 10:20:09.016639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:55.521 [2024-11-19 10:20:09.018506] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:55.521 [2024-11-19 10:20:09.018580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:55.521 [2024-11-19 10:20:09.018609] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:55.521 [2024-11-19 10:20:09.018631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.521 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.521 "name": "Existed_Raid", 00:08:55.521 "uuid": "0359d1f6-581b-430a-a738-7948dd37b750", 00:08:55.521 "strip_size_kb": 64, 00:08:55.521 "state": "configuring", 00:08:55.521 "raid_level": "concat", 00:08:55.521 "superblock": true, 00:08:55.521 "num_base_bdevs": 3, 00:08:55.521 "num_base_bdevs_discovered": 1, 00:08:55.521 "num_base_bdevs_operational": 3, 00:08:55.521 "base_bdevs_list": [ 00:08:55.521 { 00:08:55.521 "name": "BaseBdev1", 00:08:55.521 "uuid": "dc22c447-e024-427e-93f9-919e7494b081", 00:08:55.522 "is_configured": true, 00:08:55.522 "data_offset": 2048, 00:08:55.522 "data_size": 63488 00:08:55.522 }, 00:08:55.522 { 00:08:55.522 "name": "BaseBdev2", 00:08:55.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.522 "is_configured": false, 00:08:55.522 "data_offset": 0, 00:08:55.522 "data_size": 0 00:08:55.522 }, 00:08:55.522 { 00:08:55.522 "name": "BaseBdev3", 00:08:55.522 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:55.522 "is_configured": false, 00:08:55.522 "data_offset": 0, 00:08:55.522 "data_size": 0 00:08:55.522 } 00:08:55.522 ] 00:08:55.522 }' 00:08:55.522 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.522 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.781 [2024-11-19 10:20:09.519461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:55.781 BaseBdev2 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.781 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.782 [ 00:08:55.782 { 00:08:55.782 "name": "BaseBdev2", 00:08:55.782 "aliases": [ 00:08:55.782 "476d7b18-4fe8-45ce-ae8a-1984ea546a20" 00:08:55.782 ], 00:08:55.782 "product_name": "Malloc disk", 00:08:55.782 "block_size": 512, 00:08:55.782 "num_blocks": 65536, 00:08:55.782 "uuid": "476d7b18-4fe8-45ce-ae8a-1984ea546a20", 00:08:55.782 "assigned_rate_limits": { 00:08:55.782 "rw_ios_per_sec": 0, 00:08:55.782 "rw_mbytes_per_sec": 0, 00:08:55.782 "r_mbytes_per_sec": 0, 00:08:55.782 "w_mbytes_per_sec": 0 00:08:55.782 }, 00:08:55.782 "claimed": true, 00:08:55.782 "claim_type": "exclusive_write", 00:08:55.782 "zoned": false, 00:08:55.782 "supported_io_types": { 00:08:55.782 "read": true, 00:08:55.782 "write": true, 00:08:55.782 "unmap": true, 00:08:55.782 "flush": true, 00:08:55.782 "reset": true, 00:08:55.782 "nvme_admin": false, 00:08:55.782 "nvme_io": false, 00:08:55.782 "nvme_io_md": false, 00:08:55.782 "write_zeroes": true, 00:08:55.782 "zcopy": true, 00:08:55.782 "get_zone_info": false, 00:08:55.782 "zone_management": false, 00:08:55.782 "zone_append": false, 00:08:55.782 "compare": false, 00:08:55.782 "compare_and_write": false, 00:08:55.782 "abort": true, 00:08:55.782 "seek_hole": false, 00:08:55.782 "seek_data": false, 00:08:55.782 "copy": true, 00:08:55.782 "nvme_iov_md": false 00:08:55.782 }, 00:08:55.782 "memory_domains": [ 00:08:55.782 { 00:08:55.782 "dma_device_id": "system", 00:08:55.782 "dma_device_type": 1 00:08:55.782 }, 00:08:55.782 { 00:08:55.782 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.782 "dma_device_type": 2 00:08:55.782 } 00:08:55.782 ], 00:08:55.782 "driver_specific": {} 00:08:55.782 } 00:08:55.782 ] 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.782 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.042 "name": "Existed_Raid", 00:08:56.042 "uuid": "0359d1f6-581b-430a-a738-7948dd37b750", 00:08:56.042 "strip_size_kb": 64, 00:08:56.042 "state": "configuring", 00:08:56.042 "raid_level": "concat", 00:08:56.042 "superblock": true, 00:08:56.042 "num_base_bdevs": 3, 00:08:56.042 "num_base_bdevs_discovered": 2, 00:08:56.042 "num_base_bdevs_operational": 3, 00:08:56.042 "base_bdevs_list": [ 00:08:56.042 { 00:08:56.042 "name": "BaseBdev1", 00:08:56.042 "uuid": "dc22c447-e024-427e-93f9-919e7494b081", 00:08:56.042 "is_configured": true, 00:08:56.042 "data_offset": 2048, 00:08:56.042 "data_size": 63488 00:08:56.042 }, 00:08:56.042 { 00:08:56.042 "name": "BaseBdev2", 00:08:56.042 "uuid": "476d7b18-4fe8-45ce-ae8a-1984ea546a20", 00:08:56.042 "is_configured": true, 00:08:56.042 "data_offset": 2048, 00:08:56.042 "data_size": 63488 00:08:56.042 }, 00:08:56.042 { 00:08:56.042 "name": "BaseBdev3", 00:08:56.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:56.042 "is_configured": false, 00:08:56.042 "data_offset": 0, 00:08:56.042 "data_size": 0 00:08:56.042 } 00:08:56.042 ] 00:08:56.042 }' 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.042 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.302 10:20:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:56.302 10:20:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.302 10:20:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.302 [2024-11-19 10:20:10.042901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:56.302 [2024-11-19 10:20:10.043398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:56.302 [2024-11-19 10:20:10.043458] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:56.302 [2024-11-19 10:20:10.043744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:56.302 BaseBdev3 00:08:56.302 [2024-11-19 10:20:10.043950] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:56.302 [2024-11-19 10:20:10.043966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:56.302 [2024-11-19 10:20:10.044123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.302 [ 00:08:56.302 { 00:08:56.302 "name": "BaseBdev3", 00:08:56.302 "aliases": [ 00:08:56.302 "2bd294df-0d8e-4408-b67f-fc3d70df73e2" 00:08:56.302 ], 00:08:56.302 "product_name": "Malloc disk", 00:08:56.302 "block_size": 512, 00:08:56.302 "num_blocks": 65536, 00:08:56.302 "uuid": "2bd294df-0d8e-4408-b67f-fc3d70df73e2", 00:08:56.302 "assigned_rate_limits": { 00:08:56.302 "rw_ios_per_sec": 0, 00:08:56.302 "rw_mbytes_per_sec": 0, 00:08:56.302 "r_mbytes_per_sec": 0, 00:08:56.302 "w_mbytes_per_sec": 0 00:08:56.302 }, 00:08:56.302 "claimed": true, 00:08:56.302 "claim_type": "exclusive_write", 00:08:56.302 "zoned": false, 00:08:56.302 "supported_io_types": { 00:08:56.302 "read": true, 00:08:56.302 "write": true, 00:08:56.302 "unmap": true, 00:08:56.302 "flush": true, 00:08:56.302 "reset": true, 00:08:56.302 "nvme_admin": false, 00:08:56.302 "nvme_io": false, 00:08:56.302 "nvme_io_md": false, 00:08:56.302 "write_zeroes": true, 00:08:56.302 "zcopy": true, 00:08:56.302 "get_zone_info": false, 00:08:56.302 "zone_management": false, 00:08:56.302 "zone_append": false, 00:08:56.302 "compare": false, 00:08:56.302 "compare_and_write": false, 00:08:56.302 "abort": true, 00:08:56.302 "seek_hole": false, 00:08:56.302 "seek_data": false, 
00:08:56.302 "copy": true, 00:08:56.302 "nvme_iov_md": false 00:08:56.302 }, 00:08:56.302 "memory_domains": [ 00:08:56.302 { 00:08:56.302 "dma_device_id": "system", 00:08:56.302 "dma_device_type": 1 00:08:56.302 }, 00:08:56.302 { 00:08:56.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.302 "dma_device_type": 2 00:08:56.302 } 00:08:56.302 ], 00:08:56.302 "driver_specific": {} 00:08:56.302 } 00:08:56.302 ] 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.302 10:20:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.561 "name": "Existed_Raid", 00:08:56.561 "uuid": "0359d1f6-581b-430a-a738-7948dd37b750", 00:08:56.561 "strip_size_kb": 64, 00:08:56.561 "state": "online", 00:08:56.561 "raid_level": "concat", 00:08:56.561 "superblock": true, 00:08:56.561 "num_base_bdevs": 3, 00:08:56.561 "num_base_bdevs_discovered": 3, 00:08:56.561 "num_base_bdevs_operational": 3, 00:08:56.561 "base_bdevs_list": [ 00:08:56.561 { 00:08:56.561 "name": "BaseBdev1", 00:08:56.561 "uuid": "dc22c447-e024-427e-93f9-919e7494b081", 00:08:56.561 "is_configured": true, 00:08:56.561 "data_offset": 2048, 00:08:56.561 "data_size": 63488 00:08:56.561 }, 00:08:56.561 { 00:08:56.561 "name": "BaseBdev2", 00:08:56.561 "uuid": "476d7b18-4fe8-45ce-ae8a-1984ea546a20", 00:08:56.561 "is_configured": true, 00:08:56.561 "data_offset": 2048, 00:08:56.561 "data_size": 63488 00:08:56.561 }, 00:08:56.561 { 00:08:56.561 "name": "BaseBdev3", 00:08:56.561 "uuid": "2bd294df-0d8e-4408-b67f-fc3d70df73e2", 00:08:56.561 "is_configured": true, 00:08:56.561 "data_offset": 2048, 00:08:56.561 "data_size": 63488 00:08:56.561 } 00:08:56.561 ] 00:08:56.561 }' 00:08:56.561 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.561 10:20:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.821 [2024-11-19 10:20:10.510435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.821 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.821 "name": "Existed_Raid", 00:08:56.821 "aliases": [ 00:08:56.821 "0359d1f6-581b-430a-a738-7948dd37b750" 00:08:56.821 ], 00:08:56.821 "product_name": "Raid Volume", 00:08:56.821 "block_size": 512, 00:08:56.821 "num_blocks": 190464, 00:08:56.821 "uuid": "0359d1f6-581b-430a-a738-7948dd37b750", 00:08:56.821 "assigned_rate_limits": { 00:08:56.821 "rw_ios_per_sec": 0, 00:08:56.821 "rw_mbytes_per_sec": 0, 00:08:56.821 
"r_mbytes_per_sec": 0, 00:08:56.821 "w_mbytes_per_sec": 0 00:08:56.821 }, 00:08:56.821 "claimed": false, 00:08:56.821 "zoned": false, 00:08:56.821 "supported_io_types": { 00:08:56.821 "read": true, 00:08:56.821 "write": true, 00:08:56.821 "unmap": true, 00:08:56.821 "flush": true, 00:08:56.821 "reset": true, 00:08:56.821 "nvme_admin": false, 00:08:56.821 "nvme_io": false, 00:08:56.821 "nvme_io_md": false, 00:08:56.821 "write_zeroes": true, 00:08:56.821 "zcopy": false, 00:08:56.821 "get_zone_info": false, 00:08:56.821 "zone_management": false, 00:08:56.821 "zone_append": false, 00:08:56.821 "compare": false, 00:08:56.821 "compare_and_write": false, 00:08:56.821 "abort": false, 00:08:56.821 "seek_hole": false, 00:08:56.821 "seek_data": false, 00:08:56.821 "copy": false, 00:08:56.821 "nvme_iov_md": false 00:08:56.821 }, 00:08:56.821 "memory_domains": [ 00:08:56.821 { 00:08:56.821 "dma_device_id": "system", 00:08:56.821 "dma_device_type": 1 00:08:56.821 }, 00:08:56.821 { 00:08:56.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.821 "dma_device_type": 2 00:08:56.821 }, 00:08:56.821 { 00:08:56.821 "dma_device_id": "system", 00:08:56.821 "dma_device_type": 1 00:08:56.821 }, 00:08:56.821 { 00:08:56.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.821 "dma_device_type": 2 00:08:56.821 }, 00:08:56.821 { 00:08:56.821 "dma_device_id": "system", 00:08:56.821 "dma_device_type": 1 00:08:56.821 }, 00:08:56.821 { 00:08:56.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.822 "dma_device_type": 2 00:08:56.822 } 00:08:56.822 ], 00:08:56.822 "driver_specific": { 00:08:56.822 "raid": { 00:08:56.822 "uuid": "0359d1f6-581b-430a-a738-7948dd37b750", 00:08:56.822 "strip_size_kb": 64, 00:08:56.822 "state": "online", 00:08:56.822 "raid_level": "concat", 00:08:56.822 "superblock": true, 00:08:56.822 "num_base_bdevs": 3, 00:08:56.822 "num_base_bdevs_discovered": 3, 00:08:56.822 "num_base_bdevs_operational": 3, 00:08:56.822 "base_bdevs_list": [ 00:08:56.822 { 00:08:56.822 
"name": "BaseBdev1", 00:08:56.822 "uuid": "dc22c447-e024-427e-93f9-919e7494b081", 00:08:56.822 "is_configured": true, 00:08:56.822 "data_offset": 2048, 00:08:56.822 "data_size": 63488 00:08:56.822 }, 00:08:56.822 { 00:08:56.822 "name": "BaseBdev2", 00:08:56.822 "uuid": "476d7b18-4fe8-45ce-ae8a-1984ea546a20", 00:08:56.822 "is_configured": true, 00:08:56.822 "data_offset": 2048, 00:08:56.822 "data_size": 63488 00:08:56.822 }, 00:08:56.822 { 00:08:56.822 "name": "BaseBdev3", 00:08:56.822 "uuid": "2bd294df-0d8e-4408-b67f-fc3d70df73e2", 00:08:56.822 "is_configured": true, 00:08:56.822 "data_offset": 2048, 00:08:56.822 "data_size": 63488 00:08:56.822 } 00:08:56.822 ] 00:08:56.822 } 00:08:56.822 } 00:08:56.822 }' 00:08:56.822 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:57.082 BaseBdev2 00:08:57.082 BaseBdev3' 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.082 10:20:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.082 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.083 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.083 [2024-11-19 10:20:10.781697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:57.083 [2024-11-19 10:20:10.781765] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:57.083 [2024-11-19 10:20:10.781820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.343 "name": "Existed_Raid", 00:08:57.343 "uuid": "0359d1f6-581b-430a-a738-7948dd37b750", 00:08:57.343 "strip_size_kb": 64, 00:08:57.343 "state": "offline", 00:08:57.343 "raid_level": "concat", 00:08:57.343 "superblock": true, 00:08:57.343 "num_base_bdevs": 3, 00:08:57.343 "num_base_bdevs_discovered": 2, 00:08:57.343 "num_base_bdevs_operational": 2, 00:08:57.343 "base_bdevs_list": [ 00:08:57.343 { 00:08:57.343 "name": null, 00:08:57.343 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:57.343 "is_configured": false, 00:08:57.343 "data_offset": 0, 00:08:57.343 "data_size": 63488 00:08:57.343 }, 00:08:57.343 { 00:08:57.343 "name": "BaseBdev2", 00:08:57.343 "uuid": "476d7b18-4fe8-45ce-ae8a-1984ea546a20", 00:08:57.343 "is_configured": true, 00:08:57.343 "data_offset": 2048, 00:08:57.343 "data_size": 63488 00:08:57.343 }, 00:08:57.343 { 00:08:57.343 "name": "BaseBdev3", 00:08:57.343 "uuid": "2bd294df-0d8e-4408-b67f-fc3d70df73e2", 00:08:57.343 "is_configured": true, 00:08:57.343 "data_offset": 2048, 00:08:57.343 "data_size": 63488 00:08:57.343 } 00:08:57.343 ] 00:08:57.343 }' 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.343 10:20:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.603 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.863 [2024-11-19 10:20:11.413271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.863 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.863 [2024-11-19 10:20:11.558422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:57.863 [2024-11-19 10:20:11.558516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 BaseBdev2 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 
10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 [ 00:08:58.123 { 00:08:58.123 "name": "BaseBdev2", 00:08:58.123 "aliases": [ 00:08:58.123 "67d5c7d5-10e6-4676-ae60-db78ac74c32d" 00:08:58.123 ], 00:08:58.123 "product_name": "Malloc disk", 00:08:58.123 "block_size": 512, 00:08:58.123 "num_blocks": 65536, 00:08:58.123 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:08:58.123 "assigned_rate_limits": { 00:08:58.123 "rw_ios_per_sec": 0, 00:08:58.123 "rw_mbytes_per_sec": 0, 00:08:58.123 "r_mbytes_per_sec": 0, 00:08:58.123 "w_mbytes_per_sec": 0 
00:08:58.123 }, 00:08:58.123 "claimed": false, 00:08:58.123 "zoned": false, 00:08:58.123 "supported_io_types": { 00:08:58.123 "read": true, 00:08:58.123 "write": true, 00:08:58.123 "unmap": true, 00:08:58.123 "flush": true, 00:08:58.123 "reset": true, 00:08:58.123 "nvme_admin": false, 00:08:58.123 "nvme_io": false, 00:08:58.123 "nvme_io_md": false, 00:08:58.123 "write_zeroes": true, 00:08:58.123 "zcopy": true, 00:08:58.123 "get_zone_info": false, 00:08:58.123 "zone_management": false, 00:08:58.123 "zone_append": false, 00:08:58.123 "compare": false, 00:08:58.123 "compare_and_write": false, 00:08:58.123 "abort": true, 00:08:58.123 "seek_hole": false, 00:08:58.123 "seek_data": false, 00:08:58.123 "copy": true, 00:08:58.123 "nvme_iov_md": false 00:08:58.123 }, 00:08:58.123 "memory_domains": [ 00:08:58.123 { 00:08:58.123 "dma_device_id": "system", 00:08:58.123 "dma_device_type": 1 00:08:58.123 }, 00:08:58.123 { 00:08:58.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.123 "dma_device_type": 2 00:08:58.123 } 00:08:58.123 ], 00:08:58.123 "driver_specific": {} 00:08:58.123 } 00:08:58.123 ] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 BaseBdev3 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.123 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.123 [ 00:08:58.123 { 00:08:58.123 "name": "BaseBdev3", 00:08:58.123 "aliases": [ 00:08:58.123 "113e6abd-e975-43a2-a708-d504517f2c12" 00:08:58.123 ], 00:08:58.123 "product_name": "Malloc disk", 00:08:58.123 "block_size": 512, 00:08:58.123 "num_blocks": 65536, 00:08:58.123 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:08:58.123 "assigned_rate_limits": { 00:08:58.123 "rw_ios_per_sec": 0, 00:08:58.123 "rw_mbytes_per_sec": 0, 
00:08:58.123 "r_mbytes_per_sec": 0, 00:08:58.123 "w_mbytes_per_sec": 0 00:08:58.123 }, 00:08:58.123 "claimed": false, 00:08:58.123 "zoned": false, 00:08:58.123 "supported_io_types": { 00:08:58.123 "read": true, 00:08:58.123 "write": true, 00:08:58.123 "unmap": true, 00:08:58.123 "flush": true, 00:08:58.123 "reset": true, 00:08:58.123 "nvme_admin": false, 00:08:58.123 "nvme_io": false, 00:08:58.123 "nvme_io_md": false, 00:08:58.123 "write_zeroes": true, 00:08:58.123 "zcopy": true, 00:08:58.123 "get_zone_info": false, 00:08:58.124 "zone_management": false, 00:08:58.124 "zone_append": false, 00:08:58.124 "compare": false, 00:08:58.124 "compare_and_write": false, 00:08:58.124 "abort": true, 00:08:58.124 "seek_hole": false, 00:08:58.124 "seek_data": false, 00:08:58.124 "copy": true, 00:08:58.124 "nvme_iov_md": false 00:08:58.124 }, 00:08:58.124 "memory_domains": [ 00:08:58.124 { 00:08:58.124 "dma_device_id": "system", 00:08:58.124 "dma_device_type": 1 00:08:58.124 }, 00:08:58.124 { 00:08:58.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.124 "dma_device_type": 2 00:08:58.124 } 00:08:58.124 ], 00:08:58.124 "driver_specific": {} 00:08:58.124 } 00:08:58.124 ] 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.124 [2024-11-19 10:20:11.849416] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.124 [2024-11-19 10:20:11.849517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.124 [2024-11-19 10:20:11.849559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.124 [2024-11-19 10:20:11.851263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.124 10:20:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.124 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.384 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.384 "name": "Existed_Raid", 00:08:58.384 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:08:58.384 "strip_size_kb": 64, 00:08:58.384 "state": "configuring", 00:08:58.384 "raid_level": "concat", 00:08:58.384 "superblock": true, 00:08:58.384 "num_base_bdevs": 3, 00:08:58.384 "num_base_bdevs_discovered": 2, 00:08:58.384 "num_base_bdevs_operational": 3, 00:08:58.384 "base_bdevs_list": [ 00:08:58.384 { 00:08:58.384 "name": "BaseBdev1", 00:08:58.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.384 "is_configured": false, 00:08:58.384 "data_offset": 0, 00:08:58.384 "data_size": 0 00:08:58.384 }, 00:08:58.384 { 00:08:58.384 "name": "BaseBdev2", 00:08:58.384 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:08:58.384 "is_configured": true, 00:08:58.384 "data_offset": 2048, 00:08:58.384 "data_size": 63488 00:08:58.384 }, 00:08:58.384 { 00:08:58.384 "name": "BaseBdev3", 00:08:58.384 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:08:58.384 "is_configured": true, 00:08:58.384 "data_offset": 2048, 00:08:58.384 "data_size": 63488 00:08:58.384 } 00:08:58.384 ] 00:08:58.384 }' 00:08:58.384 10:20:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.384 10:20:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.644 [2024-11-19 10:20:12.312633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.644 10:20:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.644 "name": "Existed_Raid", 00:08:58.644 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:08:58.644 "strip_size_kb": 64, 00:08:58.644 "state": "configuring", 00:08:58.644 "raid_level": "concat", 00:08:58.644 "superblock": true, 00:08:58.644 "num_base_bdevs": 3, 00:08:58.644 "num_base_bdevs_discovered": 1, 00:08:58.644 "num_base_bdevs_operational": 3, 00:08:58.644 "base_bdevs_list": [ 00:08:58.644 { 00:08:58.644 "name": "BaseBdev1", 00:08:58.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.644 "is_configured": false, 00:08:58.644 "data_offset": 0, 00:08:58.644 "data_size": 0 00:08:58.644 }, 00:08:58.644 { 00:08:58.644 "name": null, 00:08:58.644 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:08:58.644 "is_configured": false, 00:08:58.644 "data_offset": 0, 00:08:58.644 "data_size": 63488 00:08:58.644 }, 00:08:58.644 { 00:08:58.644 "name": "BaseBdev3", 00:08:58.644 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:08:58.644 "is_configured": true, 00:08:58.644 "data_offset": 2048, 00:08:58.644 "data_size": 63488 00:08:58.644 } 00:08:58.644 ] 00:08:58.644 }' 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.644 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.215 10:20:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.215 [2024-11-19 10:20:12.790743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:59.215 BaseBdev1 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.215 
10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.215 [ 00:08:59.215 { 00:08:59.215 "name": "BaseBdev1", 00:08:59.215 "aliases": [ 00:08:59.215 "6d568b77-da4a-406e-9d6d-87f7ddfffdd3" 00:08:59.215 ], 00:08:59.215 "product_name": "Malloc disk", 00:08:59.215 "block_size": 512, 00:08:59.215 "num_blocks": 65536, 00:08:59.215 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:08:59.215 "assigned_rate_limits": { 00:08:59.215 "rw_ios_per_sec": 0, 00:08:59.215 "rw_mbytes_per_sec": 0, 00:08:59.215 "r_mbytes_per_sec": 0, 00:08:59.215 "w_mbytes_per_sec": 0 00:08:59.215 }, 00:08:59.215 "claimed": true, 00:08:59.215 "claim_type": "exclusive_write", 00:08:59.215 "zoned": false, 00:08:59.215 "supported_io_types": { 00:08:59.215 "read": true, 00:08:59.215 "write": true, 00:08:59.215 "unmap": true, 00:08:59.215 "flush": true, 00:08:59.215 "reset": true, 00:08:59.215 "nvme_admin": false, 00:08:59.215 "nvme_io": false, 00:08:59.215 "nvme_io_md": false, 00:08:59.215 "write_zeroes": true, 00:08:59.215 "zcopy": true, 00:08:59.215 "get_zone_info": false, 00:08:59.215 "zone_management": false, 00:08:59.215 "zone_append": false, 00:08:59.215 "compare": false, 00:08:59.215 "compare_and_write": false, 00:08:59.215 "abort": true, 00:08:59.215 "seek_hole": false, 00:08:59.215 "seek_data": false, 00:08:59.215 "copy": true, 00:08:59.215 "nvme_iov_md": false 00:08:59.215 }, 00:08:59.215 "memory_domains": [ 00:08:59.215 { 00:08:59.215 "dma_device_id": "system", 00:08:59.215 "dma_device_type": 1 00:08:59.215 }, 00:08:59.215 { 00:08:59.215 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:59.215 "dma_device_type": 2 00:08:59.215 } 00:08:59.215 ], 00:08:59.215 "driver_specific": {} 00:08:59.215 } 00:08:59.215 ] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.215 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.215 "name": "Existed_Raid", 00:08:59.216 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:08:59.216 "strip_size_kb": 64, 00:08:59.216 "state": "configuring", 00:08:59.216 "raid_level": "concat", 00:08:59.216 "superblock": true, 00:08:59.216 "num_base_bdevs": 3, 00:08:59.216 "num_base_bdevs_discovered": 2, 00:08:59.216 "num_base_bdevs_operational": 3, 00:08:59.216 "base_bdevs_list": [ 00:08:59.216 { 00:08:59.216 "name": "BaseBdev1", 00:08:59.216 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:08:59.216 "is_configured": true, 00:08:59.216 "data_offset": 2048, 00:08:59.216 "data_size": 63488 00:08:59.216 }, 00:08:59.216 { 00:08:59.216 "name": null, 00:08:59.216 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:08:59.216 "is_configured": false, 00:08:59.216 "data_offset": 0, 00:08:59.216 "data_size": 63488 00:08:59.216 }, 00:08:59.216 { 00:08:59.216 "name": "BaseBdev3", 00:08:59.216 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:08:59.216 "is_configured": true, 00:08:59.216 "data_offset": 2048, 00:08:59.216 "data_size": 63488 00:08:59.216 } 00:08:59.216 ] 00:08:59.216 }' 00:08:59.216 10:20:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.216 10:20:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.786 [2024-11-19 10:20:13.317867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.786 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.786 "name": "Existed_Raid", 00:08:59.786 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:08:59.786 "strip_size_kb": 64, 00:08:59.786 "state": "configuring", 00:08:59.786 "raid_level": "concat", 00:08:59.786 "superblock": true, 00:08:59.786 "num_base_bdevs": 3, 00:08:59.786 "num_base_bdevs_discovered": 1, 00:08:59.786 "num_base_bdevs_operational": 3, 00:08:59.786 "base_bdevs_list": [ 00:08:59.786 { 00:08:59.786 "name": "BaseBdev1", 00:08:59.786 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:08:59.786 "is_configured": true, 00:08:59.786 "data_offset": 2048, 00:08:59.786 "data_size": 63488 00:08:59.786 }, 00:08:59.786 { 00:08:59.786 "name": null, 00:08:59.787 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:08:59.787 "is_configured": false, 00:08:59.787 "data_offset": 0, 00:08:59.787 "data_size": 63488 00:08:59.787 }, 00:08:59.787 { 00:08:59.787 "name": null, 00:08:59.787 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:08:59.787 "is_configured": false, 00:08:59.787 "data_offset": 0, 00:08:59.787 "data_size": 63488 00:08:59.787 } 00:08:59.787 ] 00:08:59.787 }' 00:08:59.787 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.787 10:20:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.047 [2024-11-19 10:20:13.797084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.047 10:20:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.047 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.309 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.309 "name": "Existed_Raid", 00:09:00.309 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:09:00.309 "strip_size_kb": 64, 00:09:00.309 "state": "configuring", 00:09:00.309 "raid_level": "concat", 00:09:00.309 "superblock": true, 00:09:00.309 "num_base_bdevs": 3, 00:09:00.309 "num_base_bdevs_discovered": 2, 00:09:00.309 "num_base_bdevs_operational": 3, 00:09:00.309 "base_bdevs_list": [ 00:09:00.309 { 00:09:00.309 "name": "BaseBdev1", 00:09:00.309 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:09:00.309 "is_configured": true, 00:09:00.309 "data_offset": 2048, 00:09:00.309 "data_size": 63488 00:09:00.309 }, 00:09:00.309 { 00:09:00.309 "name": null, 00:09:00.309 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:09:00.309 "is_configured": 
false, 00:09:00.309 "data_offset": 0, 00:09:00.309 "data_size": 63488 00:09:00.309 }, 00:09:00.309 { 00:09:00.309 "name": "BaseBdev3", 00:09:00.309 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:09:00.309 "is_configured": true, 00:09:00.309 "data_offset": 2048, 00:09:00.309 "data_size": 63488 00:09:00.309 } 00:09:00.309 ] 00:09:00.309 }' 00:09:00.309 10:20:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.309 10:20:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.569 [2024-11-19 10:20:14.220343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:00.569 10:20:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.569 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.570 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.829 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.829 "name": "Existed_Raid", 00:09:00.829 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:09:00.829 "strip_size_kb": 64, 00:09:00.829 "state": "configuring", 00:09:00.829 "raid_level": "concat", 00:09:00.829 "superblock": true, 00:09:00.829 "num_base_bdevs": 3, 00:09:00.829 
"num_base_bdevs_discovered": 1, 00:09:00.829 "num_base_bdevs_operational": 3, 00:09:00.829 "base_bdevs_list": [ 00:09:00.829 { 00:09:00.829 "name": null, 00:09:00.829 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:09:00.829 "is_configured": false, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 63488 00:09:00.829 }, 00:09:00.829 { 00:09:00.829 "name": null, 00:09:00.829 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:09:00.829 "is_configured": false, 00:09:00.829 "data_offset": 0, 00:09:00.829 "data_size": 63488 00:09:00.829 }, 00:09:00.829 { 00:09:00.829 "name": "BaseBdev3", 00:09:00.829 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:09:00.829 "is_configured": true, 00:09:00.829 "data_offset": 2048, 00:09:00.829 "data_size": 63488 00:09:00.829 } 00:09:00.829 ] 00:09:00.829 }' 00:09:00.829 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.830 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.090 10:20:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 [2024-11-19 10:20:14.813534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.090 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.090 
10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.350 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.350 "name": "Existed_Raid", 00:09:01.350 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:09:01.350 "strip_size_kb": 64, 00:09:01.350 "state": "configuring", 00:09:01.350 "raid_level": "concat", 00:09:01.350 "superblock": true, 00:09:01.350 "num_base_bdevs": 3, 00:09:01.350 "num_base_bdevs_discovered": 2, 00:09:01.350 "num_base_bdevs_operational": 3, 00:09:01.350 "base_bdevs_list": [ 00:09:01.350 { 00:09:01.350 "name": null, 00:09:01.350 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:09:01.350 "is_configured": false, 00:09:01.350 "data_offset": 0, 00:09:01.350 "data_size": 63488 00:09:01.350 }, 00:09:01.350 { 00:09:01.350 "name": "BaseBdev2", 00:09:01.350 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:09:01.350 "is_configured": true, 00:09:01.350 "data_offset": 2048, 00:09:01.350 "data_size": 63488 00:09:01.350 }, 00:09:01.350 { 00:09:01.350 "name": "BaseBdev3", 00:09:01.350 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:09:01.350 "is_configured": true, 00:09:01.350 "data_offset": 2048, 00:09:01.350 "data_size": 63488 00:09:01.350 } 00:09:01.350 ] 00:09:01.350 }' 00:09:01.350 10:20:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.350 10:20:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6d568b77-da4a-406e-9d6d-87f7ddfffdd3 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.611 [2024-11-19 10:20:15.335852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:01.611 [2024-11-19 10:20:15.336083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:01.611 [2024-11-19 10:20:15.336099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:01.611 [2024-11-19 10:20:15.336338] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:01.611 [2024-11-19 10:20:15.336498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:01.611 [2024-11-19 10:20:15.336508] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:01.611 [2024-11-19 10:20:15.336657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:01.611 NewBaseBdev 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.611 [ 00:09:01.611 { 00:09:01.611 "name": "NewBaseBdev", 00:09:01.611 "aliases": [ 00:09:01.611 "6d568b77-da4a-406e-9d6d-87f7ddfffdd3" 00:09:01.611 ], 00:09:01.611 "product_name": "Malloc disk", 00:09:01.611 "block_size": 512, 
00:09:01.611 "num_blocks": 65536, 00:09:01.611 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:09:01.611 "assigned_rate_limits": { 00:09:01.611 "rw_ios_per_sec": 0, 00:09:01.611 "rw_mbytes_per_sec": 0, 00:09:01.611 "r_mbytes_per_sec": 0, 00:09:01.611 "w_mbytes_per_sec": 0 00:09:01.611 }, 00:09:01.611 "claimed": true, 00:09:01.611 "claim_type": "exclusive_write", 00:09:01.611 "zoned": false, 00:09:01.611 "supported_io_types": { 00:09:01.611 "read": true, 00:09:01.611 "write": true, 00:09:01.611 "unmap": true, 00:09:01.611 "flush": true, 00:09:01.611 "reset": true, 00:09:01.611 "nvme_admin": false, 00:09:01.611 "nvme_io": false, 00:09:01.611 "nvme_io_md": false, 00:09:01.611 "write_zeroes": true, 00:09:01.611 "zcopy": true, 00:09:01.611 "get_zone_info": false, 00:09:01.611 "zone_management": false, 00:09:01.611 "zone_append": false, 00:09:01.611 "compare": false, 00:09:01.611 "compare_and_write": false, 00:09:01.611 "abort": true, 00:09:01.611 "seek_hole": false, 00:09:01.611 "seek_data": false, 00:09:01.611 "copy": true, 00:09:01.611 "nvme_iov_md": false 00:09:01.611 }, 00:09:01.611 "memory_domains": [ 00:09:01.611 { 00:09:01.611 "dma_device_id": "system", 00:09:01.611 "dma_device_type": 1 00:09:01.611 }, 00:09:01.611 { 00:09:01.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.611 "dma_device_type": 2 00:09:01.611 } 00:09:01.611 ], 00:09:01.611 "driver_specific": {} 00:09:01.611 } 00:09:01.611 ] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.611 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.871 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.871 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.871 "name": "Existed_Raid", 00:09:01.871 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:09:01.871 "strip_size_kb": 64, 00:09:01.872 "state": "online", 00:09:01.872 "raid_level": "concat", 00:09:01.872 "superblock": true, 00:09:01.872 "num_base_bdevs": 3, 00:09:01.872 "num_base_bdevs_discovered": 3, 00:09:01.872 "num_base_bdevs_operational": 3, 00:09:01.872 "base_bdevs_list": [ 00:09:01.872 { 00:09:01.872 "name": "NewBaseBdev", 00:09:01.872 "uuid": 
"6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:09:01.872 "is_configured": true, 00:09:01.872 "data_offset": 2048, 00:09:01.872 "data_size": 63488 00:09:01.872 }, 00:09:01.872 { 00:09:01.872 "name": "BaseBdev2", 00:09:01.872 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:09:01.872 "is_configured": true, 00:09:01.872 "data_offset": 2048, 00:09:01.872 "data_size": 63488 00:09:01.872 }, 00:09:01.872 { 00:09:01.872 "name": "BaseBdev3", 00:09:01.872 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:09:01.872 "is_configured": true, 00:09:01.872 "data_offset": 2048, 00:09:01.872 "data_size": 63488 00:09:01.872 } 00:09:01.872 ] 00:09:01.872 }' 00:09:01.872 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.872 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:02.131 [2024-11-19 10:20:15.843320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.131 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:02.131 "name": "Existed_Raid", 00:09:02.131 "aliases": [ 00:09:02.131 "a3feffd6-e9fb-4cd2-b406-0f32231fd96f" 00:09:02.131 ], 00:09:02.131 "product_name": "Raid Volume", 00:09:02.131 "block_size": 512, 00:09:02.131 "num_blocks": 190464, 00:09:02.131 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:09:02.131 "assigned_rate_limits": { 00:09:02.131 "rw_ios_per_sec": 0, 00:09:02.131 "rw_mbytes_per_sec": 0, 00:09:02.131 "r_mbytes_per_sec": 0, 00:09:02.131 "w_mbytes_per_sec": 0 00:09:02.131 }, 00:09:02.131 "claimed": false, 00:09:02.131 "zoned": false, 00:09:02.131 "supported_io_types": { 00:09:02.131 "read": true, 00:09:02.131 "write": true, 00:09:02.131 "unmap": true, 00:09:02.131 "flush": true, 00:09:02.131 "reset": true, 00:09:02.131 "nvme_admin": false, 00:09:02.131 "nvme_io": false, 00:09:02.131 "nvme_io_md": false, 00:09:02.131 "write_zeroes": true, 00:09:02.131 "zcopy": false, 00:09:02.131 "get_zone_info": false, 00:09:02.131 "zone_management": false, 00:09:02.131 "zone_append": false, 00:09:02.131 "compare": false, 00:09:02.131 "compare_and_write": false, 00:09:02.131 "abort": false, 00:09:02.131 "seek_hole": false, 00:09:02.131 "seek_data": false, 00:09:02.131 "copy": false, 00:09:02.131 "nvme_iov_md": false 00:09:02.131 }, 00:09:02.131 "memory_domains": [ 00:09:02.131 { 00:09:02.131 "dma_device_id": "system", 00:09:02.131 "dma_device_type": 1 00:09:02.131 }, 00:09:02.131 { 00:09:02.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.131 "dma_device_type": 2 00:09:02.131 }, 00:09:02.131 { 00:09:02.131 "dma_device_id": "system", 00:09:02.131 "dma_device_type": 1 00:09:02.131 }, 00:09:02.131 { 00:09:02.131 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.131 "dma_device_type": 2 00:09:02.131 }, 00:09:02.131 { 00:09:02.131 "dma_device_id": "system", 00:09:02.131 "dma_device_type": 1 00:09:02.131 }, 00:09:02.131 { 00:09:02.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.131 "dma_device_type": 2 00:09:02.131 } 00:09:02.131 ], 00:09:02.131 "driver_specific": { 00:09:02.131 "raid": { 00:09:02.131 "uuid": "a3feffd6-e9fb-4cd2-b406-0f32231fd96f", 00:09:02.131 "strip_size_kb": 64, 00:09:02.131 "state": "online", 00:09:02.131 "raid_level": "concat", 00:09:02.131 "superblock": true, 00:09:02.131 "num_base_bdevs": 3, 00:09:02.131 "num_base_bdevs_discovered": 3, 00:09:02.131 "num_base_bdevs_operational": 3, 00:09:02.131 "base_bdevs_list": [ 00:09:02.131 { 00:09:02.131 "name": "NewBaseBdev", 00:09:02.131 "uuid": "6d568b77-da4a-406e-9d6d-87f7ddfffdd3", 00:09:02.131 "is_configured": true, 00:09:02.131 "data_offset": 2048, 00:09:02.131 "data_size": 63488 00:09:02.131 }, 00:09:02.131 { 00:09:02.131 "name": "BaseBdev2", 00:09:02.131 "uuid": "67d5c7d5-10e6-4676-ae60-db78ac74c32d", 00:09:02.131 "is_configured": true, 00:09:02.131 "data_offset": 2048, 00:09:02.132 "data_size": 63488 00:09:02.132 }, 00:09:02.132 { 00:09:02.132 "name": "BaseBdev3", 00:09:02.132 "uuid": "113e6abd-e975-43a2-a708-d504517f2c12", 00:09:02.132 "is_configured": true, 00:09:02.132 "data_offset": 2048, 00:09:02.132 "data_size": 63488 00:09:02.132 } 00:09:02.132 ] 00:09:02.132 } 00:09:02.132 } 00:09:02.132 }' 00:09:02.132 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:02.392 BaseBdev2 00:09:02.392 BaseBdev3' 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.392 10:20:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.392 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.393 [2024-11-19 10:20:16.098564] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.393 [2024-11-19 10:20:16.098591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.393 [2024-11-19 10:20:16.098656] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.393 [2024-11-19 10:20:16.098710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.393 [2024-11-19 10:20:16.098721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66058 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66058 ']' 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66058 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66058 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.393 killing process with pid 66058 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66058' 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66058 00:09:02.393 [2024-11-19 10:20:16.145157] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.393 10:20:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66058 00:09:02.652 [2024-11-19 10:20:16.430216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:04.035 10:20:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:04.035 00:09:04.035 real 0m10.364s 00:09:04.035 user 0m16.637s 00:09:04.035 sys 0m1.713s 00:09:04.035 10:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:09:04.035 10:20:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.035 ************************************ 00:09:04.035 END TEST raid_state_function_test_sb 00:09:04.035 ************************************ 00:09:04.035 10:20:17 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:04.035 10:20:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:04.035 10:20:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.035 10:20:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.035 ************************************ 00:09:04.035 START TEST raid_superblock_test 00:09:04.035 ************************************ 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:04.035 10:20:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66674 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66674 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:04.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66674 ']' 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.035 10:20:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.035 [2024-11-19 10:20:17.633601] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:04.035 [2024-11-19 10:20:17.633727] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66674 ] 00:09:04.035 [2024-11-19 10:20:17.808181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.295 [2024-11-19 10:20:17.913421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.555 [2024-11-19 10:20:18.099118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.555 [2024-11-19 10:20:18.099157] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:04.816 
10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.816 malloc1 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.816 [2024-11-19 10:20:18.483886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:04.816 [2024-11-19 10:20:18.483954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.816 [2024-11-19 10:20:18.483978] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:04.816 [2024-11-19 10:20:18.483987] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.816 [2024-11-19 10:20:18.485982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.816 [2024-11-19 10:20:18.486041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:04.816 pt1 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.816 malloc2 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.816 [2024-11-19 10:20:18.534701] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:04.816 [2024-11-19 10:20:18.534770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:04.816 [2024-11-19 10:20:18.534805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:04.816 [2024-11-19 10:20:18.534815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:04.816 [2024-11-19 10:20:18.536866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:04.816 [2024-11-19 10:20:18.536900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:04.816 
pt2 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.816 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.078 malloc3 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.078 [2024-11-19 10:20:18.620545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:05.078 [2024-11-19 10:20:18.620594] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.078 [2024-11-19 10:20:18.620630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:05.078 [2024-11-19 10:20:18.620639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.078 [2024-11-19 10:20:18.622596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.078 [2024-11-19 10:20:18.622630] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:05.078 pt3 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.078 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.078 [2024-11-19 10:20:18.636575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:05.078 [2024-11-19 10:20:18.638356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:05.079 [2024-11-19 10:20:18.638422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:05.079 [2024-11-19 10:20:18.638578] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:05.079 [2024-11-19 10:20:18.638600] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.079 [2024-11-19 10:20:18.638839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:05.079 [2024-11-19 10:20:18.639022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:05.079 [2024-11-19 10:20:18.639038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:05.079 [2024-11-19 10:20:18.639201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.079 10:20:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.079 "name": "raid_bdev1", 00:09:05.079 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:05.079 "strip_size_kb": 64, 00:09:05.079 "state": "online", 00:09:05.079 "raid_level": "concat", 00:09:05.079 "superblock": true, 00:09:05.079 "num_base_bdevs": 3, 00:09:05.079 "num_base_bdevs_discovered": 3, 00:09:05.079 "num_base_bdevs_operational": 3, 00:09:05.079 "base_bdevs_list": [ 00:09:05.079 { 00:09:05.079 "name": "pt1", 00:09:05.079 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.079 "is_configured": true, 00:09:05.079 "data_offset": 2048, 00:09:05.079 "data_size": 63488 00:09:05.079 }, 00:09:05.079 { 00:09:05.079 "name": "pt2", 00:09:05.079 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.079 "is_configured": true, 00:09:05.079 "data_offset": 2048, 00:09:05.079 "data_size": 63488 00:09:05.079 }, 00:09:05.079 { 00:09:05.079 "name": "pt3", 00:09:05.079 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.079 "is_configured": true, 00:09:05.079 "data_offset": 2048, 00:09:05.079 "data_size": 63488 00:09:05.079 } 00:09:05.079 ] 00:09:05.079 }' 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.079 10:20:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.343 [2024-11-19 10:20:19.052150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.343 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.343 "name": "raid_bdev1", 00:09:05.343 "aliases": [ 00:09:05.343 "0534c169-85c9-413b-99d6-9de414b1d5bb" 00:09:05.343 ], 00:09:05.343 "product_name": "Raid Volume", 00:09:05.343 "block_size": 512, 00:09:05.343 "num_blocks": 190464, 00:09:05.343 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:05.343 "assigned_rate_limits": { 00:09:05.343 "rw_ios_per_sec": 0, 00:09:05.343 "rw_mbytes_per_sec": 0, 00:09:05.343 "r_mbytes_per_sec": 0, 00:09:05.343 "w_mbytes_per_sec": 0 00:09:05.343 }, 00:09:05.343 "claimed": false, 00:09:05.343 "zoned": false, 00:09:05.343 "supported_io_types": { 00:09:05.343 "read": true, 00:09:05.343 "write": true, 00:09:05.343 "unmap": true, 00:09:05.343 "flush": true, 00:09:05.343 "reset": true, 00:09:05.343 "nvme_admin": false, 00:09:05.343 "nvme_io": false, 00:09:05.343 "nvme_io_md": false, 00:09:05.343 "write_zeroes": true, 00:09:05.343 "zcopy": false, 00:09:05.344 "get_zone_info": false, 00:09:05.344 "zone_management": false, 00:09:05.344 "zone_append": false, 00:09:05.344 "compare": 
false, 00:09:05.344 "compare_and_write": false, 00:09:05.344 "abort": false, 00:09:05.344 "seek_hole": false, 00:09:05.344 "seek_data": false, 00:09:05.344 "copy": false, 00:09:05.344 "nvme_iov_md": false 00:09:05.344 }, 00:09:05.344 "memory_domains": [ 00:09:05.344 { 00:09:05.344 "dma_device_id": "system", 00:09:05.344 "dma_device_type": 1 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.344 "dma_device_type": 2 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "dma_device_id": "system", 00:09:05.344 "dma_device_type": 1 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.344 "dma_device_type": 2 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "dma_device_id": "system", 00:09:05.344 "dma_device_type": 1 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.344 "dma_device_type": 2 00:09:05.344 } 00:09:05.344 ], 00:09:05.344 "driver_specific": { 00:09:05.344 "raid": { 00:09:05.344 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:05.344 "strip_size_kb": 64, 00:09:05.344 "state": "online", 00:09:05.344 "raid_level": "concat", 00:09:05.344 "superblock": true, 00:09:05.344 "num_base_bdevs": 3, 00:09:05.344 "num_base_bdevs_discovered": 3, 00:09:05.344 "num_base_bdevs_operational": 3, 00:09:05.344 "base_bdevs_list": [ 00:09:05.344 { 00:09:05.344 "name": "pt1", 00:09:05.344 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.344 "is_configured": true, 00:09:05.344 "data_offset": 2048, 00:09:05.344 "data_size": 63488 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "name": "pt2", 00:09:05.344 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.344 "is_configured": true, 00:09:05.344 "data_offset": 2048, 00:09:05.344 "data_size": 63488 00:09:05.344 }, 00:09:05.344 { 00:09:05.344 "name": "pt3", 00:09:05.344 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.344 "is_configured": true, 00:09:05.344 "data_offset": 2048, 00:09:05.344 
"data_size": 63488 00:09:05.344 } 00:09:05.344 ] 00:09:05.344 } 00:09:05.344 } 00:09:05.344 }' 00:09:05.344 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.344 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:05.344 pt2 00:09:05.344 pt3' 00:09:05.344 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.612 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 [2024-11-19 10:20:19.295642] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0534c169-85c9-413b-99d6-9de414b1d5bb 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 0534c169-85c9-413b-99d6-9de414b1d5bb ']' 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 [2024-11-19 10:20:19.327341] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.613 [2024-11-19 10:20:19.327369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.613 [2024-11-19 10:20:19.327436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.613 [2024-11-19 10:20:19.327497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.613 [2024-11-19 10:20:19.327506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.613 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.880 10:20:19 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 [2024-11-19 10:20:19.463172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:05.880 [2024-11-19 10:20:19.464899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:05.880 [2024-11-19 10:20:19.464953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:05.880 [2024-11-19 10:20:19.465011] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:05.880 [2024-11-19 10:20:19.465057] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:05.880 [2024-11-19 10:20:19.465075] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:05.880 [2024-11-19 10:20:19.465091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.880 [2024-11-19 10:20:19.465100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:05.880 request: 00:09:05.880 { 00:09:05.880 "name": "raid_bdev1", 00:09:05.880 "raid_level": "concat", 00:09:05.880 "base_bdevs": [ 00:09:05.880 "malloc1", 00:09:05.880 "malloc2", 00:09:05.880 "malloc3" 00:09:05.880 ], 00:09:05.880 "strip_size_kb": 64, 00:09:05.880 "superblock": false, 00:09:05.880 "method": "bdev_raid_create", 00:09:05.880 "req_id": 1 00:09:05.880 } 00:09:05.880 Got JSON-RPC error response 00:09:05.880 response: 00:09:05.880 { 00:09:05.880 "code": -17, 00:09:05.880 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:05.880 } 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 [2024-11-19 10:20:19.527024] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:05.880 [2024-11-19 10:20:19.527066] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.880 [2024-11-19 10:20:19.527081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:05.880 [2024-11-19 10:20:19.527089] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.880 [2024-11-19 10:20:19.529185] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.880 [2024-11-19 10:20:19.529217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:05.880 [2024-11-19 10:20:19.529284] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:05.880 [2024-11-19 10:20:19.529338] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:05.880 pt1 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.880 "name": "raid_bdev1", 
00:09:05.880 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:05.880 "strip_size_kb": 64, 00:09:05.880 "state": "configuring", 00:09:05.880 "raid_level": "concat", 00:09:05.880 "superblock": true, 00:09:05.880 "num_base_bdevs": 3, 00:09:05.880 "num_base_bdevs_discovered": 1, 00:09:05.880 "num_base_bdevs_operational": 3, 00:09:05.880 "base_bdevs_list": [ 00:09:05.880 { 00:09:05.880 "name": "pt1", 00:09:05.880 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:05.880 "is_configured": true, 00:09:05.880 "data_offset": 2048, 00:09:05.880 "data_size": 63488 00:09:05.880 }, 00:09:05.880 { 00:09:05.880 "name": null, 00:09:05.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:05.880 "is_configured": false, 00:09:05.880 "data_offset": 2048, 00:09:05.880 "data_size": 63488 00:09:05.880 }, 00:09:05.880 { 00:09:05.880 "name": null, 00:09:05.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:05.880 "is_configured": false, 00:09:05.880 "data_offset": 2048, 00:09:05.880 "data_size": 63488 00:09:05.880 } 00:09:05.880 ] 00:09:05.880 }' 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.880 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.449 [2024-11-19 10:20:19.954307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.449 [2024-11-19 10:20:19.954410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.449 [2024-11-19 10:20:19.954450] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:06.449 [2024-11-19 10:20:19.954498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.449 [2024-11-19 10:20:19.954944] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.449 [2024-11-19 10:20:19.955020] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.449 [2024-11-19 10:20:19.955138] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:06.449 [2024-11-19 10:20:19.955190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.449 pt2 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.449 [2024-11-19 10:20:19.966289] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.449 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.450 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.450 10:20:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.450 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.450 "name": "raid_bdev1", 00:09:06.450 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:06.450 "strip_size_kb": 64, 00:09:06.450 "state": "configuring", 00:09:06.450 "raid_level": "concat", 00:09:06.450 "superblock": true, 00:09:06.450 "num_base_bdevs": 3, 00:09:06.450 "num_base_bdevs_discovered": 1, 00:09:06.450 "num_base_bdevs_operational": 3, 00:09:06.450 "base_bdevs_list": [ 00:09:06.450 { 00:09:06.450 "name": "pt1", 00:09:06.450 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.450 "is_configured": true, 00:09:06.450 "data_offset": 2048, 00:09:06.450 "data_size": 63488 00:09:06.450 }, 00:09:06.450 { 00:09:06.450 "name": null, 00:09:06.450 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.450 "is_configured": false, 00:09:06.450 "data_offset": 0, 00:09:06.450 "data_size": 63488 00:09:06.450 }, 00:09:06.450 { 00:09:06.450 "name": null, 00:09:06.450 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.450 "is_configured": false, 00:09:06.450 "data_offset": 2048, 00:09:06.450 "data_size": 63488 00:09:06.450 } 00:09:06.450 ] 00:09:06.450 }' 00:09:06.450 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.450 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.709 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:06.709 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.709 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:06.709 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.709 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.710 [2024-11-19 10:20:20.397552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:06.710 [2024-11-19 10:20:20.397657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.710 [2024-11-19 10:20:20.397676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:06.710 [2024-11-19 10:20:20.397686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.710 [2024-11-19 10:20:20.398146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.710 [2024-11-19 10:20:20.398174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:06.710 [2024-11-19 10:20:20.398248] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:06.710 [2024-11-19 10:20:20.398272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:06.710 pt2 00:09:06.710 10:20:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.710 [2024-11-19 10:20:20.409523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:06.710 [2024-11-19 10:20:20.409569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:06.710 [2024-11-19 10:20:20.409598] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:06.710 [2024-11-19 10:20:20.409607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:06.710 [2024-11-19 10:20:20.409955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:06.710 [2024-11-19 10:20:20.409989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:06.710 [2024-11-19 10:20:20.410059] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:06.710 [2024-11-19 10:20:20.410079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:06.710 [2024-11-19 10:20:20.410182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:06.710 [2024-11-19 10:20:20.410192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:06.710 [2024-11-19 10:20:20.410425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:06.710 [2024-11-19 10:20:20.410562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:06.710 [2024-11-19 10:20:20.410570] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:06.710 [2024-11-19 10:20:20.410691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:06.710 pt3 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.710 10:20:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.710 "name": "raid_bdev1", 00:09:06.710 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:06.710 "strip_size_kb": 64, 00:09:06.710 "state": "online", 00:09:06.710 "raid_level": "concat", 00:09:06.710 "superblock": true, 00:09:06.710 "num_base_bdevs": 3, 00:09:06.710 "num_base_bdevs_discovered": 3, 00:09:06.710 "num_base_bdevs_operational": 3, 00:09:06.710 "base_bdevs_list": [ 00:09:06.710 { 00:09:06.710 "name": "pt1", 00:09:06.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:06.710 "is_configured": true, 00:09:06.710 "data_offset": 2048, 00:09:06.710 "data_size": 63488 00:09:06.710 }, 00:09:06.710 { 00:09:06.710 "name": "pt2", 00:09:06.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:06.710 "is_configured": true, 00:09:06.710 "data_offset": 2048, 00:09:06.710 "data_size": 63488 00:09:06.710 }, 00:09:06.710 { 00:09:06.710 "name": "pt3", 00:09:06.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:06.710 "is_configured": true, 00:09:06.710 "data_offset": 2048, 00:09:06.710 "data_size": 63488 00:09:06.710 } 00:09:06.710 ] 00:09:06.710 }' 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.710 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=raid_bdev1 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:07.280 [2024-11-19 10:20:20.845107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:07.280 "name": "raid_bdev1", 00:09:07.280 "aliases": [ 00:09:07.280 "0534c169-85c9-413b-99d6-9de414b1d5bb" 00:09:07.280 ], 00:09:07.280 "product_name": "Raid Volume", 00:09:07.280 "block_size": 512, 00:09:07.280 "num_blocks": 190464, 00:09:07.280 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:07.280 "assigned_rate_limits": { 00:09:07.280 "rw_ios_per_sec": 0, 00:09:07.280 "rw_mbytes_per_sec": 0, 00:09:07.280 "r_mbytes_per_sec": 0, 00:09:07.280 "w_mbytes_per_sec": 0 00:09:07.280 }, 00:09:07.280 "claimed": false, 00:09:07.280 "zoned": false, 00:09:07.280 "supported_io_types": { 00:09:07.280 "read": true, 00:09:07.280 "write": true, 00:09:07.280 "unmap": true, 00:09:07.280 "flush": true, 00:09:07.280 "reset": true, 00:09:07.280 "nvme_admin": false, 00:09:07.280 "nvme_io": false, 00:09:07.280 
"nvme_io_md": false, 00:09:07.280 "write_zeroes": true, 00:09:07.280 "zcopy": false, 00:09:07.280 "get_zone_info": false, 00:09:07.280 "zone_management": false, 00:09:07.280 "zone_append": false, 00:09:07.280 "compare": false, 00:09:07.280 "compare_and_write": false, 00:09:07.280 "abort": false, 00:09:07.280 "seek_hole": false, 00:09:07.280 "seek_data": false, 00:09:07.280 "copy": false, 00:09:07.280 "nvme_iov_md": false 00:09:07.280 }, 00:09:07.280 "memory_domains": [ 00:09:07.280 { 00:09:07.280 "dma_device_id": "system", 00:09:07.280 "dma_device_type": 1 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.280 "dma_device_type": 2 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "dma_device_id": "system", 00:09:07.280 "dma_device_type": 1 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.280 "dma_device_type": 2 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "dma_device_id": "system", 00:09:07.280 "dma_device_type": 1 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.280 "dma_device_type": 2 00:09:07.280 } 00:09:07.280 ], 00:09:07.280 "driver_specific": { 00:09:07.280 "raid": { 00:09:07.280 "uuid": "0534c169-85c9-413b-99d6-9de414b1d5bb", 00:09:07.280 "strip_size_kb": 64, 00:09:07.280 "state": "online", 00:09:07.280 "raid_level": "concat", 00:09:07.280 "superblock": true, 00:09:07.280 "num_base_bdevs": 3, 00:09:07.280 "num_base_bdevs_discovered": 3, 00:09:07.280 "num_base_bdevs_operational": 3, 00:09:07.280 "base_bdevs_list": [ 00:09:07.280 { 00:09:07.280 "name": "pt1", 00:09:07.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.280 "is_configured": true, 00:09:07.280 "data_offset": 2048, 00:09:07.280 "data_size": 63488 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "name": "pt2", 00:09:07.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.280 "is_configured": true, 00:09:07.280 "data_offset": 2048, 00:09:07.280 "data_size": 
63488 00:09:07.280 }, 00:09:07.280 { 00:09:07.280 "name": "pt3", 00:09:07.280 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.280 "is_configured": true, 00:09:07.280 "data_offset": 2048, 00:09:07.280 "data_size": 63488 00:09:07.280 } 00:09:07.280 ] 00:09:07.280 } 00:09:07.280 } 00:09:07.280 }' 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:07.280 pt2 00:09:07.280 pt3' 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.280 10:20:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.280 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.280 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.280 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.280 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:09:07.281 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.281 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.281 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.281 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] 
| .uuid' 00:09:07.540 [2024-11-19 10:20:21.100575] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 0534c169-85c9-413b-99d6-9de414b1d5bb '!=' 0534c169-85c9-413b-99d6-9de414b1d5bb ']' 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:07.540 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66674 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66674 ']' 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66674 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66674 00:09:07.541 killing process with pid 66674 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66674' 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66674 00:09:07.541 [2024-11-19 10:20:21.185334] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.541 [2024-11-19 10:20:21.185415] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.541 [2024-11-19 10:20:21.185473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.541 [2024-11-19 10:20:21.185484] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:07.541 10:20:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66674 00:09:07.800 [2024-11-19 10:20:21.467730] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.740 ************************************ 00:09:08.740 END TEST raid_superblock_test 00:09:08.740 ************************************ 00:09:08.740 10:20:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:08.740 00:09:08.740 real 0m4.970s 00:09:08.740 user 0m7.123s 00:09:08.740 sys 0m0.820s 00:09:08.740 10:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.740 10:20:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.000 10:20:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:09.001 10:20:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:09.001 10:20:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:09.001 10:20:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:09.001 ************************************ 00:09:09.001 START TEST raid_read_error_test 00:09:09.001 ************************************ 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:09.001 10:20:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.EXXA8g2UmG 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66926 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66926 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66926 ']' 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.001 10:20:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.001 [2024-11-19 10:20:22.680061] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:09.001 [2024-11-19 10:20:22.680203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66926 ] 00:09:09.262 [2024-11-19 10:20:22.853055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.262 [2024-11-19 10:20:22.962229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.521 [2024-11-19 10:20:23.146677] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.521 [2024-11-19 10:20:23.146712] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.781 BaseBdev1_malloc 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.781 true 00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:09.781 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:09.782 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.782 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.782 [2024-11-19 10:20:23.558764] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:09.782 [2024-11-19 10:20:23.558819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.782 [2024-11-19 10:20:23.558838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:09.782 [2024-11-19 10:20:23.558848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.042 [2024-11-19 10:20:23.560916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.043 [2024-11-19 10:20:23.560957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:10.043 BaseBdev1 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 BaseBdev2_malloc 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 true 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 [2024-11-19 10:20:23.625082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:10.043 [2024-11-19 10:20:23.625136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.043 [2024-11-19 10:20:23.625168] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:10.043 [2024-11-19 10:20:23.625178] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.043 [2024-11-19 10:20:23.627152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.043 [2024-11-19 10:20:23.627250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:10.043 BaseBdev2 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 BaseBdev3_malloc 00:09:10.043 10:20:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 true 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 [2024-11-19 10:20:23.694994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:10.043 [2024-11-19 10:20:23.695054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:10.043 [2024-11-19 10:20:23.695071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:10.043 [2024-11-19 10:20:23.695080] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:10.043 [2024-11-19 10:20:23.697177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:10.043 [2024-11-19 10:20:23.697249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:10.043 BaseBdev3 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 [2024-11-19 10:20:23.703067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.043 [2024-11-19 10:20:23.704767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.043 [2024-11-19 10:20:23.704845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:10.043 [2024-11-19 10:20:23.705042] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:10.043 [2024-11-19 10:20:23.705056] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:10.043 [2024-11-19 10:20:23.705288] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:10.043 [2024-11-19 10:20:23.705430] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:10.043 [2024-11-19 10:20:23.705448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:10.043 [2024-11-19 10:20:23.705584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.043 10:20:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.043 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.043 "name": "raid_bdev1", 00:09:10.043 "uuid": "8fbd413e-5358-4fd5-883c-8ca32f4193e5", 00:09:10.043 "strip_size_kb": 64, 00:09:10.043 "state": "online", 00:09:10.043 "raid_level": "concat", 00:09:10.043 "superblock": true, 00:09:10.043 "num_base_bdevs": 3, 00:09:10.043 "num_base_bdevs_discovered": 3, 00:09:10.044 "num_base_bdevs_operational": 3, 00:09:10.044 "base_bdevs_list": [ 00:09:10.044 { 00:09:10.044 "name": "BaseBdev1", 00:09:10.044 "uuid": "622330e1-d6de-5a4b-bb9c-28b70d785a19", 00:09:10.044 "is_configured": true, 00:09:10.044 "data_offset": 2048, 00:09:10.044 "data_size": 63488 00:09:10.044 }, 00:09:10.044 { 00:09:10.044 "name": "BaseBdev2", 00:09:10.044 "uuid": "10a0d342-0b34-53bc-a525-3eda25a789c3", 00:09:10.044 "is_configured": true, 00:09:10.044 "data_offset": 2048, 00:09:10.044 "data_size": 63488 
00:09:10.044 }, 00:09:10.044 { 00:09:10.044 "name": "BaseBdev3", 00:09:10.044 "uuid": "028d38ca-66a4-5c3b-86e5-7da17b942fee", 00:09:10.044 "is_configured": true, 00:09:10.044 "data_offset": 2048, 00:09:10.044 "data_size": 63488 00:09:10.044 } 00:09:10.044 ] 00:09:10.044 }' 00:09:10.044 10:20:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.044 10:20:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.614 10:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:10.614 10:20:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:10.614 [2024-11-19 10:20:24.211379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:11.552 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.553 "name": "raid_bdev1", 00:09:11.553 "uuid": "8fbd413e-5358-4fd5-883c-8ca32f4193e5", 00:09:11.553 "strip_size_kb": 64, 00:09:11.553 "state": "online", 00:09:11.553 "raid_level": "concat", 00:09:11.553 "superblock": true, 00:09:11.553 "num_base_bdevs": 3, 00:09:11.553 "num_base_bdevs_discovered": 3, 00:09:11.553 "num_base_bdevs_operational": 3, 00:09:11.553 "base_bdevs_list": [ 00:09:11.553 { 00:09:11.553 "name": "BaseBdev1", 00:09:11.553 "uuid": "622330e1-d6de-5a4b-bb9c-28b70d785a19", 00:09:11.553 "is_configured": true, 00:09:11.553 "data_offset": 2048, 00:09:11.553 "data_size": 63488 
00:09:11.553 }, 00:09:11.553 { 00:09:11.553 "name": "BaseBdev2", 00:09:11.553 "uuid": "10a0d342-0b34-53bc-a525-3eda25a789c3", 00:09:11.553 "is_configured": true, 00:09:11.553 "data_offset": 2048, 00:09:11.553 "data_size": 63488 00:09:11.553 }, 00:09:11.553 { 00:09:11.553 "name": "BaseBdev3", 00:09:11.553 "uuid": "028d38ca-66a4-5c3b-86e5-7da17b942fee", 00:09:11.553 "is_configured": true, 00:09:11.553 "data_offset": 2048, 00:09:11.553 "data_size": 63488 00:09:11.553 } 00:09:11.553 ] 00:09:11.553 }' 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.553 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.812 [2024-11-19 10:20:25.522616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.812 [2024-11-19 10:20:25.522648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.812 [2024-11-19 10:20:25.525245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.812 [2024-11-19 10:20:25.525293] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.812 [2024-11-19 10:20:25.525329] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.812 [2024-11-19 10:20:25.525340] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:11.812 { 00:09:11.812 "results": [ 00:09:11.812 { 00:09:11.812 "job": "raid_bdev1", 00:09:11.812 "core_mask": "0x1", 00:09:11.812 "workload": "randrw", 00:09:11.812 "percentage": 50, 
00:09:11.812 "status": "finished", 00:09:11.812 "queue_depth": 1, 00:09:11.812 "io_size": 131072, 00:09:11.812 "runtime": 1.31196, 00:09:11.812 "iops": 16857.98347510595, 00:09:11.812 "mibps": 2107.2479343882437, 00:09:11.812 "io_failed": 1, 00:09:11.812 "io_timeout": 0, 00:09:11.812 "avg_latency_us": 82.46470447709804, 00:09:11.812 "min_latency_us": 24.705676855895195, 00:09:11.812 "max_latency_us": 1366.5257641921398 00:09:11.812 } 00:09:11.812 ], 00:09:11.812 "core_count": 1 00:09:11.812 } 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66926 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66926 ']' 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66926 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66926 00:09:11.812 killing process with pid 66926 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66926' 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66926 00:09:11.812 [2024-11-19 10:20:25.564663] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.812 10:20:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66926 00:09:12.071 [2024-11-19 
10:20:25.781198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:13.452 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.EXXA8g2UmG 00:09:13.452 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:13.452 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:13.452 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:09:13.452 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:13.452 ************************************ 00:09:13.453 END TEST raid_read_error_test 00:09:13.453 ************************************ 00:09:13.453 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:13.453 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:13.453 10:20:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:09:13.453 00:09:13.453 real 0m4.321s 00:09:13.453 user 0m5.065s 00:09:13.453 sys 0m0.540s 00:09:13.453 10:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:13.453 10:20:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.453 10:20:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:13.453 10:20:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:13.453 10:20:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:13.453 10:20:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:13.453 ************************************ 00:09:13.453 START TEST raid_write_error_test 00:09:13.453 ************************************ 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:13.453 10:20:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:13.453 10:20:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JyLO3oIitz 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67069 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67069 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67069 ']' 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:13.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:13.453 10:20:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.453 [2024-11-19 10:20:27.070510] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:13.453 [2024-11-19 10:20:27.070622] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67069 ] 00:09:13.713 [2024-11-19 10:20:27.243271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.713 [2024-11-19 10:20:27.358499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.973 [2024-11-19 10:20:27.552432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.973 [2024-11-19 10:20:27.552570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 BaseBdev1_malloc 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 true 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 [2024-11-19 10:20:27.944844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:14.233 [2024-11-19 10:20:27.944900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.233 [2024-11-19 10:20:27.944934] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:14.233 [2024-11-19 10:20:27.944944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.233 [2024-11-19 10:20:27.946946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.233 [2024-11-19 10:20:27.946985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:14.233 BaseBdev1 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.233 BaseBdev2_malloc 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.233 10:20:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 true 00:09:14.233 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.233 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:14.233 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.233 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.233 [2024-11-19 10:20:28.010571] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:14.233 [2024-11-19 10:20:28.010623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.233 [2024-11-19 10:20:28.010638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:14.233 [2024-11-19 10:20:28.010647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.493 [2024-11-19 10:20:28.012614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.493 [2024-11-19 10:20:28.012651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:14.493 BaseBdev2 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:14.493 10:20:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.493 BaseBdev3_malloc 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.493 true 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.493 [2024-11-19 10:20:28.086427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:14.493 [2024-11-19 10:20:28.086478] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.493 [2024-11-19 10:20:28.086493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:14.493 [2024-11-19 10:20:28.086503] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.493 [2024-11-19 10:20:28.088472] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.493 [2024-11-19 10:20:28.088557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:14.493 BaseBdev3 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.493 [2024-11-19 10:20:28.098480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.493 [2024-11-19 10:20:28.100199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.493 [2024-11-19 10:20:28.100274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.493 [2024-11-19 10:20:28.100452] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.493 [2024-11-19 10:20:28.100464] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:14.493 [2024-11-19 10:20:28.100726] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:14.493 [2024-11-19 10:20:28.100869] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.493 [2024-11-19 10:20:28.100881] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:14.493 [2024-11-19 10:20:28.101020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.493 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.493 "name": "raid_bdev1", 00:09:14.493 "uuid": "da0ad4f5-791d-434f-aee9-fb4865d6e69d", 00:09:14.493 "strip_size_kb": 64, 00:09:14.493 "state": "online", 00:09:14.493 "raid_level": "concat", 00:09:14.493 "superblock": true, 00:09:14.493 "num_base_bdevs": 3, 00:09:14.493 "num_base_bdevs_discovered": 3, 00:09:14.493 "num_base_bdevs_operational": 3, 00:09:14.493 "base_bdevs_list": [ 00:09:14.493 { 00:09:14.493 
"name": "BaseBdev1", 00:09:14.493 "uuid": "9afb2acb-19f2-5310-8085-3d7991697481", 00:09:14.493 "is_configured": true, 00:09:14.493 "data_offset": 2048, 00:09:14.493 "data_size": 63488 00:09:14.493 }, 00:09:14.493 { 00:09:14.493 "name": "BaseBdev2", 00:09:14.493 "uuid": "84129b21-f2ea-5150-b53d-df8186bd7ee9", 00:09:14.493 "is_configured": true, 00:09:14.493 "data_offset": 2048, 00:09:14.493 "data_size": 63488 00:09:14.493 }, 00:09:14.493 { 00:09:14.493 "name": "BaseBdev3", 00:09:14.493 "uuid": "7dada654-b18a-50a1-8e93-fc659e7cd64d", 00:09:14.493 "is_configured": true, 00:09:14.494 "data_offset": 2048, 00:09:14.494 "data_size": 63488 00:09:14.494 } 00:09:14.494 ] 00:09:14.494 }' 00:09:14.494 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.494 10:20:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.753 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:14.753 10:20:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:15.013 [2024-11-19 10:20:28.622868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.950 "name": "raid_bdev1", 00:09:15.950 "uuid": "da0ad4f5-791d-434f-aee9-fb4865d6e69d", 00:09:15.950 "strip_size_kb": 64, 00:09:15.950 "state": "online", 
00:09:15.950 "raid_level": "concat", 00:09:15.950 "superblock": true, 00:09:15.950 "num_base_bdevs": 3, 00:09:15.950 "num_base_bdevs_discovered": 3, 00:09:15.950 "num_base_bdevs_operational": 3, 00:09:15.950 "base_bdevs_list": [ 00:09:15.950 { 00:09:15.950 "name": "BaseBdev1", 00:09:15.950 "uuid": "9afb2acb-19f2-5310-8085-3d7991697481", 00:09:15.950 "is_configured": true, 00:09:15.950 "data_offset": 2048, 00:09:15.950 "data_size": 63488 00:09:15.950 }, 00:09:15.950 { 00:09:15.950 "name": "BaseBdev2", 00:09:15.950 "uuid": "84129b21-f2ea-5150-b53d-df8186bd7ee9", 00:09:15.950 "is_configured": true, 00:09:15.950 "data_offset": 2048, 00:09:15.950 "data_size": 63488 00:09:15.950 }, 00:09:15.950 { 00:09:15.950 "name": "BaseBdev3", 00:09:15.950 "uuid": "7dada654-b18a-50a1-8e93-fc659e7cd64d", 00:09:15.950 "is_configured": true, 00:09:15.950 "data_offset": 2048, 00:09:15.950 "data_size": 63488 00:09:15.950 } 00:09:15.950 ] 00:09:15.950 }' 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.950 10:20:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.519 10:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:16.519 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.519 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.519 [2024-11-19 10:20:30.014862] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:16.519 [2024-11-19 10:20:30.014893] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:16.519 [2024-11-19 10:20:30.017589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:16.519 [2024-11-19 10:20:30.017665] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.519 [2024-11-19 10:20:30.017717] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:16.520 [2024-11-19 10:20:30.017761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:16.520 { 00:09:16.520 "results": [ 00:09:16.520 { 00:09:16.520 "job": "raid_bdev1", 00:09:16.520 "core_mask": "0x1", 00:09:16.520 "workload": "randrw", 00:09:16.520 "percentage": 50, 00:09:16.520 "status": "finished", 00:09:16.520 "queue_depth": 1, 00:09:16.520 "io_size": 131072, 00:09:16.520 "runtime": 1.392824, 00:09:16.520 "iops": 16467.981597100566, 00:09:16.520 "mibps": 2058.497699637571, 00:09:16.520 "io_failed": 1, 00:09:16.520 "io_timeout": 0, 00:09:16.520 "avg_latency_us": 84.29131850010718, 00:09:16.520 "min_latency_us": 24.258515283842794, 00:09:16.520 "max_latency_us": 1466.6899563318777 00:09:16.520 } 00:09:16.520 ], 00:09:16.520 "core_count": 1 00:09:16.520 } 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67069 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67069 ']' 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67069 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67069 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.520 killing process with pid 67069 00:09:16.520 10:20:30 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67069' 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67069 00:09:16.520 [2024-11-19 10:20:30.063429] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:16.520 10:20:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67069 00:09:16.520 [2024-11-19 10:20:30.283392] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JyLO3oIitz 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:17.904 ************************************ 00:09:17.904 END TEST raid_write_error_test 00:09:17.904 ************************************ 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:17.904 00:09:17.904 real 0m4.449s 00:09:17.904 user 0m5.321s 00:09:17.904 sys 0m0.524s 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.904 10:20:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.904 10:20:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:17.904 10:20:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:17.904 10:20:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.904 10:20:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.904 10:20:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.904 ************************************ 00:09:17.904 START TEST raid_state_function_test 00:09:17.904 ************************************ 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67207 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67207' 00:09:17.904 Process raid pid: 67207 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67207 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67207 ']' 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.904 10:20:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.904 [2024-11-19 10:20:31.584077] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:17.904 [2024-11-19 10:20:31.584270] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.165 [2024-11-19 10:20:31.758667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.165 [2024-11-19 10:20:31.869710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.425 [2024-11-19 10:20:32.073122] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.425 [2024-11-19 10:20:32.073241] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.685 [2024-11-19 10:20:32.405875] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.685 [2024-11-19 10:20:32.405982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.685 [2024-11-19 10:20:32.406024] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.685 [2024-11-19 10:20:32.406048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.685 [2024-11-19 10:20:32.406066] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.685 [2024-11-19 10:20:32.406102] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.685 
10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.685 "name": "Existed_Raid", 00:09:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.685 "strip_size_kb": 0, 00:09:18.685 "state": "configuring", 00:09:18.685 "raid_level": "raid1", 00:09:18.685 "superblock": false, 00:09:18.685 "num_base_bdevs": 3, 00:09:18.685 "num_base_bdevs_discovered": 0, 00:09:18.685 "num_base_bdevs_operational": 3, 00:09:18.685 "base_bdevs_list": [ 00:09:18.685 { 00:09:18.685 "name": "BaseBdev1", 00:09:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.685 "is_configured": false, 00:09:18.685 "data_offset": 0, 00:09:18.685 "data_size": 0 00:09:18.685 }, 00:09:18.685 { 00:09:18.685 "name": "BaseBdev2", 00:09:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.685 "is_configured": false, 00:09:18.685 "data_offset": 0, 00:09:18.685 "data_size": 0 00:09:18.685 }, 00:09:18.685 { 00:09:18.685 "name": "BaseBdev3", 00:09:18.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.685 "is_configured": false, 00:09:18.685 "data_offset": 0, 00:09:18.685 "data_size": 0 00:09:18.685 } 00:09:18.685 ] 00:09:18.685 }' 00:09:18.685 10:20:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.685 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.255 [2024-11-19 10:20:32.865051] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.255 [2024-11-19 10:20:32.865130] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.255 [2024-11-19 10:20:32.877016] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:19.255 [2024-11-19 10:20:32.877093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:19.255 [2024-11-19 10:20:32.877120] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.255 [2024-11-19 10:20:32.877142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.255 [2024-11-19 10:20:32.877160] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.255 [2024-11-19 10:20:32.877179] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.255 [2024-11-19 10:20:32.922217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.255 BaseBdev1 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.255 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.255 [ 00:09:19.255 { 00:09:19.255 "name": "BaseBdev1", 00:09:19.255 "aliases": [ 00:09:19.255 "4e4802ce-f394-4bae-b5e1-2921118dbd44" 00:09:19.255 ], 00:09:19.255 "product_name": "Malloc disk", 00:09:19.255 "block_size": 512, 00:09:19.255 "num_blocks": 65536, 00:09:19.255 "uuid": "4e4802ce-f394-4bae-b5e1-2921118dbd44", 00:09:19.255 "assigned_rate_limits": { 00:09:19.255 "rw_ios_per_sec": 0, 00:09:19.255 "rw_mbytes_per_sec": 0, 00:09:19.255 "r_mbytes_per_sec": 0, 00:09:19.255 "w_mbytes_per_sec": 0 00:09:19.255 }, 00:09:19.255 "claimed": true, 00:09:19.255 "claim_type": "exclusive_write", 00:09:19.255 "zoned": false, 00:09:19.255 "supported_io_types": { 00:09:19.255 "read": true, 00:09:19.255 "write": true, 00:09:19.255 "unmap": true, 00:09:19.255 "flush": true, 00:09:19.255 "reset": true, 00:09:19.255 "nvme_admin": false, 00:09:19.255 "nvme_io": false, 00:09:19.255 "nvme_io_md": false, 00:09:19.255 "write_zeroes": true, 00:09:19.255 "zcopy": true, 00:09:19.255 "get_zone_info": false, 00:09:19.255 "zone_management": false, 00:09:19.255 "zone_append": false, 00:09:19.255 "compare": false, 00:09:19.255 "compare_and_write": false, 00:09:19.255 "abort": true, 00:09:19.255 "seek_hole": false, 00:09:19.255 "seek_data": false, 00:09:19.255 "copy": true, 00:09:19.255 "nvme_iov_md": false 00:09:19.256 }, 00:09:19.256 "memory_domains": [ 00:09:19.256 { 00:09:19.256 "dma_device_id": "system", 00:09:19.256 "dma_device_type": 1 00:09:19.256 }, 00:09:19.256 { 00:09:19.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.256 "dma_device_type": 2 00:09:19.256 } 00:09:19.256 ], 00:09:19.256 "driver_specific": {} 00:09:19.256 } 00:09:19.256 ] 00:09:19.256 10:20:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.256 10:20:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.256 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:19.256 "name": "Existed_Raid", 00:09:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.256 "strip_size_kb": 0, 00:09:19.256 "state": "configuring", 00:09:19.256 "raid_level": "raid1", 00:09:19.256 "superblock": false, 00:09:19.256 "num_base_bdevs": 3, 00:09:19.256 "num_base_bdevs_discovered": 1, 00:09:19.256 "num_base_bdevs_operational": 3, 00:09:19.256 "base_bdevs_list": [ 00:09:19.256 { 00:09:19.256 "name": "BaseBdev1", 00:09:19.256 "uuid": "4e4802ce-f394-4bae-b5e1-2921118dbd44", 00:09:19.256 "is_configured": true, 00:09:19.256 "data_offset": 0, 00:09:19.256 "data_size": 65536 00:09:19.256 }, 00:09:19.256 { 00:09:19.256 "name": "BaseBdev2", 00:09:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.256 "is_configured": false, 00:09:19.256 "data_offset": 0, 00:09:19.256 "data_size": 0 00:09:19.256 }, 00:09:19.256 { 00:09:19.256 "name": "BaseBdev3", 00:09:19.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.256 "is_configured": false, 00:09:19.256 "data_offset": 0, 00:09:19.256 "data_size": 0 00:09:19.256 } 00:09:19.256 ] 00:09:19.256 }' 00:09:19.256 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.256 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.827 [2024-11-19 10:20:33.393435] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.827 [2024-11-19 10:20:33.393482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.827 [2024-11-19 10:20:33.405453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:19.827 [2024-11-19 10:20:33.407198] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:19.827 [2024-11-19 10:20:33.407286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:19.827 [2024-11-19 10:20:33.407301] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:19.827 [2024-11-19 10:20:33.407311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.827 "name": "Existed_Raid", 00:09:19.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.827 "strip_size_kb": 0, 00:09:19.827 "state": "configuring", 00:09:19.827 "raid_level": "raid1", 00:09:19.827 "superblock": false, 00:09:19.827 "num_base_bdevs": 3, 00:09:19.827 "num_base_bdevs_discovered": 1, 00:09:19.827 "num_base_bdevs_operational": 3, 00:09:19.827 "base_bdevs_list": [ 00:09:19.827 { 00:09:19.827 "name": "BaseBdev1", 00:09:19.827 "uuid": "4e4802ce-f394-4bae-b5e1-2921118dbd44", 00:09:19.827 "is_configured": true, 00:09:19.827 "data_offset": 0, 00:09:19.827 "data_size": 65536 00:09:19.827 }, 00:09:19.827 { 00:09:19.827 "name": "BaseBdev2", 00:09:19.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.827 
"is_configured": false, 00:09:19.827 "data_offset": 0, 00:09:19.827 "data_size": 0 00:09:19.827 }, 00:09:19.827 { 00:09:19.827 "name": "BaseBdev3", 00:09:19.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.827 "is_configured": false, 00:09:19.827 "data_offset": 0, 00:09:19.827 "data_size": 0 00:09:19.827 } 00:09:19.827 ] 00:09:19.827 }' 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.827 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.088 [2024-11-19 10:20:33.850036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:20.088 BaseBdev2 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.088 10:20:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.088 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.348 [ 00:09:20.348 { 00:09:20.348 "name": "BaseBdev2", 00:09:20.348 "aliases": [ 00:09:20.348 "ee68dcdb-b460-4bd0-a3b1-db04fa7b2efa" 00:09:20.348 ], 00:09:20.348 "product_name": "Malloc disk", 00:09:20.348 "block_size": 512, 00:09:20.348 "num_blocks": 65536, 00:09:20.348 "uuid": "ee68dcdb-b460-4bd0-a3b1-db04fa7b2efa", 00:09:20.348 "assigned_rate_limits": { 00:09:20.348 "rw_ios_per_sec": 0, 00:09:20.348 "rw_mbytes_per_sec": 0, 00:09:20.348 "r_mbytes_per_sec": 0, 00:09:20.348 "w_mbytes_per_sec": 0 00:09:20.348 }, 00:09:20.348 "claimed": true, 00:09:20.348 "claim_type": "exclusive_write", 00:09:20.348 "zoned": false, 00:09:20.348 "supported_io_types": { 00:09:20.348 "read": true, 00:09:20.348 "write": true, 00:09:20.348 "unmap": true, 00:09:20.348 "flush": true, 00:09:20.348 "reset": true, 00:09:20.348 "nvme_admin": false, 00:09:20.348 "nvme_io": false, 00:09:20.348 "nvme_io_md": false, 00:09:20.348 "write_zeroes": true, 00:09:20.348 "zcopy": true, 00:09:20.348 "get_zone_info": false, 00:09:20.348 "zone_management": false, 00:09:20.348 "zone_append": false, 00:09:20.348 "compare": false, 00:09:20.348 "compare_and_write": false, 00:09:20.348 "abort": true, 00:09:20.348 "seek_hole": false, 00:09:20.348 "seek_data": false, 00:09:20.348 "copy": true, 00:09:20.348 "nvme_iov_md": false 00:09:20.348 }, 00:09:20.348 
"memory_domains": [ 00:09:20.348 { 00:09:20.348 "dma_device_id": "system", 00:09:20.348 "dma_device_type": 1 00:09:20.348 }, 00:09:20.348 { 00:09:20.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.348 "dma_device_type": 2 00:09:20.348 } 00:09:20.348 ], 00:09:20.348 "driver_specific": {} 00:09:20.348 } 00:09:20.348 ] 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.348 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.349 "name": "Existed_Raid", 00:09:20.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.349 "strip_size_kb": 0, 00:09:20.349 "state": "configuring", 00:09:20.349 "raid_level": "raid1", 00:09:20.349 "superblock": false, 00:09:20.349 "num_base_bdevs": 3, 00:09:20.349 "num_base_bdevs_discovered": 2, 00:09:20.349 "num_base_bdevs_operational": 3, 00:09:20.349 "base_bdevs_list": [ 00:09:20.349 { 00:09:20.349 "name": "BaseBdev1", 00:09:20.349 "uuid": "4e4802ce-f394-4bae-b5e1-2921118dbd44", 00:09:20.349 "is_configured": true, 00:09:20.349 "data_offset": 0, 00:09:20.349 "data_size": 65536 00:09:20.349 }, 00:09:20.349 { 00:09:20.349 "name": "BaseBdev2", 00:09:20.349 "uuid": "ee68dcdb-b460-4bd0-a3b1-db04fa7b2efa", 00:09:20.349 "is_configured": true, 00:09:20.349 "data_offset": 0, 00:09:20.349 "data_size": 65536 00:09:20.349 }, 00:09:20.349 { 00:09:20.349 "name": "BaseBdev3", 00:09:20.349 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.349 "is_configured": false, 00:09:20.349 "data_offset": 0, 00:09:20.349 "data_size": 0 00:09:20.349 } 00:09:20.349 ] 00:09:20.349 }' 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.349 10:20:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.609 [2024-11-19 10:20:34.376691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:20.609 [2024-11-19 10:20:34.376736] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:20.609 [2024-11-19 10:20:34.376747] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:20.609 [2024-11-19 10:20:34.377047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:20.609 [2024-11-19 10:20:34.377244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:20.609 [2024-11-19 10:20:34.377253] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:20.609 [2024-11-19 10:20:34.377521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.609 BaseBdev3 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.609 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.870 [ 00:09:20.870 { 00:09:20.870 "name": "BaseBdev3", 00:09:20.870 "aliases": [ 00:09:20.870 "617d0961-f2bc-4cc0-9ac2-6b64b27f36b5" 00:09:20.870 ], 00:09:20.870 "product_name": "Malloc disk", 00:09:20.870 "block_size": 512, 00:09:20.870 "num_blocks": 65536, 00:09:20.870 "uuid": "617d0961-f2bc-4cc0-9ac2-6b64b27f36b5", 00:09:20.870 "assigned_rate_limits": { 00:09:20.870 "rw_ios_per_sec": 0, 00:09:20.870 "rw_mbytes_per_sec": 0, 00:09:20.870 "r_mbytes_per_sec": 0, 00:09:20.870 "w_mbytes_per_sec": 0 00:09:20.870 }, 00:09:20.870 "claimed": true, 00:09:20.870 "claim_type": "exclusive_write", 00:09:20.870 "zoned": false, 00:09:20.870 "supported_io_types": { 00:09:20.870 "read": true, 00:09:20.870 "write": true, 00:09:20.870 "unmap": true, 00:09:20.870 "flush": true, 00:09:20.870 "reset": true, 00:09:20.870 "nvme_admin": false, 00:09:20.870 "nvme_io": false, 00:09:20.870 "nvme_io_md": false, 00:09:20.870 "write_zeroes": true, 00:09:20.870 "zcopy": true, 00:09:20.870 "get_zone_info": false, 00:09:20.870 "zone_management": false, 00:09:20.870 "zone_append": false, 00:09:20.870 "compare": false, 00:09:20.870 "compare_and_write": false, 00:09:20.870 "abort": true, 00:09:20.870 "seek_hole": false, 00:09:20.870 "seek_data": false, 00:09:20.870 
"copy": true, 00:09:20.870 "nvme_iov_md": false 00:09:20.870 }, 00:09:20.870 "memory_domains": [ 00:09:20.870 { 00:09:20.870 "dma_device_id": "system", 00:09:20.870 "dma_device_type": 1 00:09:20.870 }, 00:09:20.870 { 00:09:20.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.870 "dma_device_type": 2 00:09:20.870 } 00:09:20.870 ], 00:09:20.870 "driver_specific": {} 00:09:20.870 } 00:09:20.870 ] 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.870 10:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.870 "name": "Existed_Raid", 00:09:20.870 "uuid": "7a82681e-3276-4679-ac98-b057ac43a783", 00:09:20.870 "strip_size_kb": 0, 00:09:20.870 "state": "online", 00:09:20.870 "raid_level": "raid1", 00:09:20.870 "superblock": false, 00:09:20.870 "num_base_bdevs": 3, 00:09:20.870 "num_base_bdevs_discovered": 3, 00:09:20.870 "num_base_bdevs_operational": 3, 00:09:20.870 "base_bdevs_list": [ 00:09:20.870 { 00:09:20.870 "name": "BaseBdev1", 00:09:20.870 "uuid": "4e4802ce-f394-4bae-b5e1-2921118dbd44", 00:09:20.870 "is_configured": true, 00:09:20.870 "data_offset": 0, 00:09:20.870 "data_size": 65536 00:09:20.870 }, 00:09:20.870 { 00:09:20.870 "name": "BaseBdev2", 00:09:20.870 "uuid": "ee68dcdb-b460-4bd0-a3b1-db04fa7b2efa", 00:09:20.870 "is_configured": true, 00:09:20.870 "data_offset": 0, 00:09:20.870 "data_size": 65536 00:09:20.870 }, 00:09:20.870 { 00:09:20.870 "name": "BaseBdev3", 00:09:20.870 "uuid": "617d0961-f2bc-4cc0-9ac2-6b64b27f36b5", 00:09:20.870 "is_configured": true, 00:09:20.870 "data_offset": 0, 00:09:20.870 "data_size": 65536 00:09:20.870 } 00:09:20.870 ] 00:09:20.870 }' 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.870 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.133 10:20:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.133 [2024-11-19 10:20:34.880230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:21.133 10:20:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.414 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:21.414 "name": "Existed_Raid", 00:09:21.414 "aliases": [ 00:09:21.414 "7a82681e-3276-4679-ac98-b057ac43a783" 00:09:21.414 ], 00:09:21.414 "product_name": "Raid Volume", 00:09:21.414 "block_size": 512, 00:09:21.414 "num_blocks": 65536, 00:09:21.414 "uuid": "7a82681e-3276-4679-ac98-b057ac43a783", 00:09:21.414 "assigned_rate_limits": { 00:09:21.414 "rw_ios_per_sec": 0, 00:09:21.414 "rw_mbytes_per_sec": 0, 00:09:21.414 "r_mbytes_per_sec": 0, 00:09:21.414 "w_mbytes_per_sec": 0 00:09:21.414 }, 00:09:21.414 "claimed": false, 00:09:21.414 "zoned": false, 
00:09:21.414 "supported_io_types": { 00:09:21.414 "read": true, 00:09:21.414 "write": true, 00:09:21.414 "unmap": false, 00:09:21.414 "flush": false, 00:09:21.414 "reset": true, 00:09:21.414 "nvme_admin": false, 00:09:21.414 "nvme_io": false, 00:09:21.414 "nvme_io_md": false, 00:09:21.414 "write_zeroes": true, 00:09:21.414 "zcopy": false, 00:09:21.414 "get_zone_info": false, 00:09:21.414 "zone_management": false, 00:09:21.414 "zone_append": false, 00:09:21.414 "compare": false, 00:09:21.414 "compare_and_write": false, 00:09:21.414 "abort": false, 00:09:21.414 "seek_hole": false, 00:09:21.414 "seek_data": false, 00:09:21.414 "copy": false, 00:09:21.414 "nvme_iov_md": false 00:09:21.414 }, 00:09:21.414 "memory_domains": [ 00:09:21.414 { 00:09:21.414 "dma_device_id": "system", 00:09:21.414 "dma_device_type": 1 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.414 "dma_device_type": 2 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "dma_device_id": "system", 00:09:21.414 "dma_device_type": 1 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.414 "dma_device_type": 2 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "dma_device_id": "system", 00:09:21.414 "dma_device_type": 1 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.414 "dma_device_type": 2 00:09:21.414 } 00:09:21.414 ], 00:09:21.414 "driver_specific": { 00:09:21.414 "raid": { 00:09:21.414 "uuid": "7a82681e-3276-4679-ac98-b057ac43a783", 00:09:21.414 "strip_size_kb": 0, 00:09:21.414 "state": "online", 00:09:21.414 "raid_level": "raid1", 00:09:21.414 "superblock": false, 00:09:21.414 "num_base_bdevs": 3, 00:09:21.414 "num_base_bdevs_discovered": 3, 00:09:21.414 "num_base_bdevs_operational": 3, 00:09:21.414 "base_bdevs_list": [ 00:09:21.414 { 00:09:21.414 "name": "BaseBdev1", 00:09:21.414 "uuid": "4e4802ce-f394-4bae-b5e1-2921118dbd44", 00:09:21.414 "is_configured": true, 00:09:21.414 
"data_offset": 0, 00:09:21.414 "data_size": 65536 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "name": "BaseBdev2", 00:09:21.414 "uuid": "ee68dcdb-b460-4bd0-a3b1-db04fa7b2efa", 00:09:21.414 "is_configured": true, 00:09:21.414 "data_offset": 0, 00:09:21.414 "data_size": 65536 00:09:21.414 }, 00:09:21.414 { 00:09:21.414 "name": "BaseBdev3", 00:09:21.414 "uuid": "617d0961-f2bc-4cc0-9ac2-6b64b27f36b5", 00:09:21.414 "is_configured": true, 00:09:21.414 "data_offset": 0, 00:09:21.414 "data_size": 65536 00:09:21.414 } 00:09:21.414 ] 00:09:21.414 } 00:09:21.414 } 00:09:21.414 }' 00:09:21.414 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:21.414 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:21.414 BaseBdev2 00:09:21.414 BaseBdev3' 00:09:21.414 10:20:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.414 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:21.414 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.414 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:21.414 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.414 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.414 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.415 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.415 [2024-11-19 10:20:35.155505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.675 "name": "Existed_Raid", 00:09:21.675 "uuid": "7a82681e-3276-4679-ac98-b057ac43a783", 00:09:21.675 "strip_size_kb": 0, 00:09:21.675 "state": "online", 00:09:21.675 "raid_level": "raid1", 00:09:21.675 "superblock": false, 00:09:21.675 "num_base_bdevs": 3, 00:09:21.675 "num_base_bdevs_discovered": 2, 00:09:21.675 "num_base_bdevs_operational": 2, 00:09:21.675 "base_bdevs_list": [ 00:09:21.675 { 00:09:21.675 "name": null, 00:09:21.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.675 "is_configured": false, 00:09:21.675 "data_offset": 0, 00:09:21.675 "data_size": 65536 00:09:21.675 }, 00:09:21.675 { 00:09:21.675 "name": "BaseBdev2", 00:09:21.675 "uuid": "ee68dcdb-b460-4bd0-a3b1-db04fa7b2efa", 00:09:21.675 "is_configured": true, 00:09:21.675 "data_offset": 0, 00:09:21.675 "data_size": 65536 00:09:21.675 }, 00:09:21.675 { 00:09:21.675 "name": "BaseBdev3", 00:09:21.675 "uuid": "617d0961-f2bc-4cc0-9ac2-6b64b27f36b5", 00:09:21.675 "is_configured": true, 00:09:21.675 "data_offset": 0, 00:09:21.675 "data_size": 65536 00:09:21.675 } 00:09:21.675 ] 
00:09:21.675 }' 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.675 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.935 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.935 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.935 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.935 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.935 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.936 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.936 [2024-11-19 10:20:35.707349] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:22.195 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.195 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.195 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.196 10:20:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 [2024-11-19 10:20:35.853603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.196 [2024-11-19 10:20:35.853760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:22.196 [2024-11-19 10:20:35.947653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:22.196 [2024-11-19 10:20:35.947700] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:22.196 [2024-11-19 10:20:35.947712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:22.196 10:20:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.196 10:20:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.456 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:22.456 10:20:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.456 BaseBdev2 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.456 
10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.456 [ 00:09:22.456 { 00:09:22.456 "name": "BaseBdev2", 00:09:22.456 "aliases": [ 00:09:22.456 "0ba60195-a0f2-4e15-a9c7-68b3a4aface8" 00:09:22.456 ], 00:09:22.456 "product_name": "Malloc disk", 00:09:22.456 "block_size": 512, 00:09:22.456 "num_blocks": 65536, 00:09:22.456 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:22.456 "assigned_rate_limits": { 00:09:22.456 "rw_ios_per_sec": 0, 00:09:22.456 "rw_mbytes_per_sec": 0, 00:09:22.456 "r_mbytes_per_sec": 0, 00:09:22.456 "w_mbytes_per_sec": 0 00:09:22.456 }, 00:09:22.456 "claimed": false, 00:09:22.456 "zoned": false, 00:09:22.456 "supported_io_types": { 00:09:22.456 "read": true, 00:09:22.456 "write": true, 00:09:22.456 "unmap": true, 00:09:22.456 "flush": true, 00:09:22.456 "reset": true, 00:09:22.456 "nvme_admin": false, 00:09:22.456 "nvme_io": false, 00:09:22.456 "nvme_io_md": false, 00:09:22.456 "write_zeroes": true, 
00:09:22.456 "zcopy": true, 00:09:22.456 "get_zone_info": false, 00:09:22.456 "zone_management": false, 00:09:22.456 "zone_append": false, 00:09:22.456 "compare": false, 00:09:22.456 "compare_and_write": false, 00:09:22.456 "abort": true, 00:09:22.456 "seek_hole": false, 00:09:22.456 "seek_data": false, 00:09:22.456 "copy": true, 00:09:22.456 "nvme_iov_md": false 00:09:22.456 }, 00:09:22.456 "memory_domains": [ 00:09:22.456 { 00:09:22.456 "dma_device_id": "system", 00:09:22.456 "dma_device_type": 1 00:09:22.456 }, 00:09:22.456 { 00:09:22.456 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.456 "dma_device_type": 2 00:09:22.456 } 00:09:22.456 ], 00:09:22.456 "driver_specific": {} 00:09:22.456 } 00:09:22.456 ] 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.456 BaseBdev3 00:09:22.456 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.457 10:20:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.457 [ 00:09:22.457 { 00:09:22.457 "name": "BaseBdev3", 00:09:22.457 "aliases": [ 00:09:22.457 "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf" 00:09:22.457 ], 00:09:22.457 "product_name": "Malloc disk", 00:09:22.457 "block_size": 512, 00:09:22.457 "num_blocks": 65536, 00:09:22.457 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:22.457 "assigned_rate_limits": { 00:09:22.457 "rw_ios_per_sec": 0, 00:09:22.457 "rw_mbytes_per_sec": 0, 00:09:22.457 "r_mbytes_per_sec": 0, 00:09:22.457 "w_mbytes_per_sec": 0 00:09:22.457 }, 00:09:22.457 "claimed": false, 00:09:22.457 "zoned": false, 00:09:22.457 "supported_io_types": { 00:09:22.457 "read": true, 00:09:22.457 "write": true, 00:09:22.457 "unmap": true, 00:09:22.457 "flush": true, 00:09:22.457 "reset": true, 00:09:22.457 "nvme_admin": false, 00:09:22.457 "nvme_io": false, 00:09:22.457 "nvme_io_md": false, 00:09:22.457 "write_zeroes": true, 
00:09:22.457 "zcopy": true, 00:09:22.457 "get_zone_info": false, 00:09:22.457 "zone_management": false, 00:09:22.457 "zone_append": false, 00:09:22.457 "compare": false, 00:09:22.457 "compare_and_write": false, 00:09:22.457 "abort": true, 00:09:22.457 "seek_hole": false, 00:09:22.457 "seek_data": false, 00:09:22.457 "copy": true, 00:09:22.457 "nvme_iov_md": false 00:09:22.457 }, 00:09:22.457 "memory_domains": [ 00:09:22.457 { 00:09:22.457 "dma_device_id": "system", 00:09:22.457 "dma_device_type": 1 00:09:22.457 }, 00:09:22.457 { 00:09:22.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.457 "dma_device_type": 2 00:09:22.457 } 00:09:22.457 ], 00:09:22.457 "driver_specific": {} 00:09:22.457 } 00:09:22.457 ] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.457 [2024-11-19 10:20:36.158334] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.457 [2024-11-19 10:20:36.158441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.457 [2024-11-19 10:20:36.158482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.457 [2024-11-19 10:20:36.160208] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:22.457 "name": "Existed_Raid", 00:09:22.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.457 "strip_size_kb": 0, 00:09:22.457 "state": "configuring", 00:09:22.457 "raid_level": "raid1", 00:09:22.457 "superblock": false, 00:09:22.457 "num_base_bdevs": 3, 00:09:22.457 "num_base_bdevs_discovered": 2, 00:09:22.457 "num_base_bdevs_operational": 3, 00:09:22.457 "base_bdevs_list": [ 00:09:22.457 { 00:09:22.457 "name": "BaseBdev1", 00:09:22.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.457 "is_configured": false, 00:09:22.457 "data_offset": 0, 00:09:22.457 "data_size": 0 00:09:22.457 }, 00:09:22.457 { 00:09:22.457 "name": "BaseBdev2", 00:09:22.457 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:22.457 "is_configured": true, 00:09:22.457 "data_offset": 0, 00:09:22.457 "data_size": 65536 00:09:22.457 }, 00:09:22.457 { 00:09:22.457 "name": "BaseBdev3", 00:09:22.457 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:22.457 "is_configured": true, 00:09:22.457 "data_offset": 0, 00:09:22.457 "data_size": 65536 00:09:22.457 } 00:09:22.457 ] 00:09:22.457 }' 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.457 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.029 [2024-11-19 10:20:36.577656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.029 "name": "Existed_Raid", 00:09:23.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.029 "strip_size_kb": 0, 00:09:23.029 "state": "configuring", 00:09:23.029 "raid_level": "raid1", 00:09:23.029 "superblock": false, 00:09:23.029 "num_base_bdevs": 3, 
00:09:23.029 "num_base_bdevs_discovered": 1, 00:09:23.029 "num_base_bdevs_operational": 3, 00:09:23.029 "base_bdevs_list": [ 00:09:23.029 { 00:09:23.029 "name": "BaseBdev1", 00:09:23.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.029 "is_configured": false, 00:09:23.029 "data_offset": 0, 00:09:23.029 "data_size": 0 00:09:23.029 }, 00:09:23.029 { 00:09:23.029 "name": null, 00:09:23.029 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:23.029 "is_configured": false, 00:09:23.029 "data_offset": 0, 00:09:23.029 "data_size": 65536 00:09:23.029 }, 00:09:23.029 { 00:09:23.029 "name": "BaseBdev3", 00:09:23.029 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:23.029 "is_configured": true, 00:09:23.029 "data_offset": 0, 00:09:23.029 "data_size": 65536 00:09:23.029 } 00:09:23.029 ] 00:09:23.029 }' 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.029 10:20:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.289 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.289 10:20:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.549 [2024-11-19 10:20:37.109288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.549 BaseBdev1 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.549 [ 00:09:23.549 { 00:09:23.549 "name": "BaseBdev1", 00:09:23.549 "aliases": [ 00:09:23.549 "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f" 00:09:23.549 ], 00:09:23.549 "product_name": "Malloc disk", 
00:09:23.549 "block_size": 512, 00:09:23.549 "num_blocks": 65536, 00:09:23.549 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:23.549 "assigned_rate_limits": { 00:09:23.549 "rw_ios_per_sec": 0, 00:09:23.549 "rw_mbytes_per_sec": 0, 00:09:23.549 "r_mbytes_per_sec": 0, 00:09:23.549 "w_mbytes_per_sec": 0 00:09:23.549 }, 00:09:23.549 "claimed": true, 00:09:23.549 "claim_type": "exclusive_write", 00:09:23.549 "zoned": false, 00:09:23.549 "supported_io_types": { 00:09:23.549 "read": true, 00:09:23.549 "write": true, 00:09:23.549 "unmap": true, 00:09:23.549 "flush": true, 00:09:23.549 "reset": true, 00:09:23.549 "nvme_admin": false, 00:09:23.549 "nvme_io": false, 00:09:23.549 "nvme_io_md": false, 00:09:23.549 "write_zeroes": true, 00:09:23.549 "zcopy": true, 00:09:23.549 "get_zone_info": false, 00:09:23.549 "zone_management": false, 00:09:23.549 "zone_append": false, 00:09:23.549 "compare": false, 00:09:23.549 "compare_and_write": false, 00:09:23.549 "abort": true, 00:09:23.549 "seek_hole": false, 00:09:23.549 "seek_data": false, 00:09:23.549 "copy": true, 00:09:23.549 "nvme_iov_md": false 00:09:23.549 }, 00:09:23.549 "memory_domains": [ 00:09:23.549 { 00:09:23.549 "dma_device_id": "system", 00:09:23.549 "dma_device_type": 1 00:09:23.549 }, 00:09:23.549 { 00:09:23.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.549 "dma_device_type": 2 00:09:23.549 } 00:09:23.549 ], 00:09:23.549 "driver_specific": {} 00:09:23.549 } 00:09:23.549 ] 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.549 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.549 "name": "Existed_Raid", 00:09:23.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.549 "strip_size_kb": 0, 00:09:23.549 "state": "configuring", 00:09:23.549 "raid_level": "raid1", 00:09:23.549 "superblock": false, 00:09:23.549 "num_base_bdevs": 3, 00:09:23.549 "num_base_bdevs_discovered": 2, 00:09:23.549 "num_base_bdevs_operational": 3, 00:09:23.549 "base_bdevs_list": [ 00:09:23.549 { 00:09:23.549 "name": "BaseBdev1", 00:09:23.549 "uuid": 
"cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:23.549 "is_configured": true, 00:09:23.550 "data_offset": 0, 00:09:23.550 "data_size": 65536 00:09:23.550 }, 00:09:23.550 { 00:09:23.550 "name": null, 00:09:23.550 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:23.550 "is_configured": false, 00:09:23.550 "data_offset": 0, 00:09:23.550 "data_size": 65536 00:09:23.550 }, 00:09:23.550 { 00:09:23.550 "name": "BaseBdev3", 00:09:23.550 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:23.550 "is_configured": true, 00:09:23.550 "data_offset": 0, 00:09:23.550 "data_size": 65536 00:09:23.550 } 00:09:23.550 ] 00:09:23.550 }' 00:09:23.550 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.550 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.809 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.809 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.809 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.810 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.810 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.070 [2024-11-19 10:20:37.608427] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:24.070 10:20:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.070 "name": "Existed_Raid", 00:09:24.070 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:24.070 "strip_size_kb": 0, 00:09:24.070 "state": "configuring", 00:09:24.070 "raid_level": "raid1", 00:09:24.070 "superblock": false, 00:09:24.070 "num_base_bdevs": 3, 00:09:24.070 "num_base_bdevs_discovered": 1, 00:09:24.070 "num_base_bdevs_operational": 3, 00:09:24.070 "base_bdevs_list": [ 00:09:24.070 { 00:09:24.070 "name": "BaseBdev1", 00:09:24.070 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:24.070 "is_configured": true, 00:09:24.070 "data_offset": 0, 00:09:24.070 "data_size": 65536 00:09:24.070 }, 00:09:24.070 { 00:09:24.070 "name": null, 00:09:24.070 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:24.070 "is_configured": false, 00:09:24.070 "data_offset": 0, 00:09:24.070 "data_size": 65536 00:09:24.070 }, 00:09:24.070 { 00:09:24.070 "name": null, 00:09:24.070 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:24.070 "is_configured": false, 00:09:24.070 "data_offset": 0, 00:09:24.070 "data_size": 65536 00:09:24.070 } 00:09:24.070 ] 00:09:24.070 }' 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.070 10:20:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.330 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.330 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.330 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.330 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.330 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.590 [2024-11-19 10:20:38.123621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.590 "name": "Existed_Raid", 00:09:24.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.590 "strip_size_kb": 0, 00:09:24.590 "state": "configuring", 00:09:24.590 "raid_level": "raid1", 00:09:24.590 "superblock": false, 00:09:24.590 "num_base_bdevs": 3, 00:09:24.590 "num_base_bdevs_discovered": 2, 00:09:24.590 "num_base_bdevs_operational": 3, 00:09:24.590 "base_bdevs_list": [ 00:09:24.590 { 00:09:24.590 "name": "BaseBdev1", 00:09:24.590 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:24.590 "is_configured": true, 00:09:24.590 "data_offset": 0, 00:09:24.590 "data_size": 65536 00:09:24.590 }, 00:09:24.590 { 00:09:24.590 "name": null, 00:09:24.590 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:24.590 "is_configured": false, 00:09:24.590 "data_offset": 0, 00:09:24.590 "data_size": 65536 00:09:24.590 }, 00:09:24.590 { 00:09:24.590 "name": "BaseBdev3", 00:09:24.590 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:24.590 "is_configured": true, 00:09:24.590 "data_offset": 0, 00:09:24.590 "data_size": 65536 00:09:24.590 } 00:09:24.590 ] 00:09:24.590 }' 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.590 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.850 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.850 [2024-11-19 10:20:38.614803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.111 10:20:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.111 "name": "Existed_Raid", 00:09:25.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.111 "strip_size_kb": 0, 00:09:25.111 "state": "configuring", 00:09:25.111 "raid_level": "raid1", 00:09:25.111 "superblock": false, 00:09:25.111 "num_base_bdevs": 3, 00:09:25.111 "num_base_bdevs_discovered": 1, 00:09:25.111 "num_base_bdevs_operational": 3, 00:09:25.111 "base_bdevs_list": [ 00:09:25.111 { 00:09:25.111 "name": null, 00:09:25.111 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:25.111 "is_configured": false, 00:09:25.111 "data_offset": 0, 00:09:25.111 "data_size": 65536 00:09:25.111 }, 00:09:25.111 { 00:09:25.111 "name": null, 00:09:25.111 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:25.111 "is_configured": false, 00:09:25.111 "data_offset": 0, 00:09:25.111 "data_size": 65536 00:09:25.111 }, 00:09:25.111 { 00:09:25.111 "name": "BaseBdev3", 00:09:25.111 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:25.111 "is_configured": true, 00:09:25.111 "data_offset": 0, 00:09:25.111 "data_size": 65536 00:09:25.111 } 00:09:25.111 ] 00:09:25.111 }' 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.111 10:20:38 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.371 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.645 [2024-11-19 10:20:39.155275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.645 "name": "Existed_Raid", 00:09:25.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.645 "strip_size_kb": 0, 00:09:25.645 "state": "configuring", 00:09:25.645 "raid_level": "raid1", 00:09:25.645 "superblock": false, 00:09:25.645 "num_base_bdevs": 3, 00:09:25.645 "num_base_bdevs_discovered": 2, 00:09:25.645 "num_base_bdevs_operational": 3, 00:09:25.645 "base_bdevs_list": [ 00:09:25.645 { 00:09:25.645 "name": null, 00:09:25.645 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:25.645 "is_configured": false, 00:09:25.645 "data_offset": 0, 00:09:25.645 "data_size": 65536 00:09:25.645 }, 00:09:25.645 { 00:09:25.645 "name": "BaseBdev2", 00:09:25.645 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:25.645 "is_configured": true, 00:09:25.645 "data_offset": 0, 00:09:25.645 "data_size": 65536 00:09:25.645 }, 00:09:25.645 { 
00:09:25.645 "name": "BaseBdev3", 00:09:25.645 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:25.645 "is_configured": true, 00:09:25.645 "data_offset": 0, 00:09:25.645 "data_size": 65536 00:09:25.645 } 00:09:25.645 ] 00:09:25.645 }' 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.645 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f 00:09:25.909 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.909 10:20:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 [2024-11-19 10:20:39.705738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:26.169 [2024-11-19 10:20:39.705781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:26.169 [2024-11-19 10:20:39.705788] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:26.169 [2024-11-19 10:20:39.706056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:26.169 [2024-11-19 10:20:39.706230] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:26.169 [2024-11-19 10:20:39.706243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:26.169 [2024-11-19 10:20:39.706470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.169 NewBaseBdev 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 [ 00:09:26.169 { 00:09:26.169 "name": "NewBaseBdev", 00:09:26.169 "aliases": [ 00:09:26.169 "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f" 00:09:26.169 ], 00:09:26.169 "product_name": "Malloc disk", 00:09:26.169 "block_size": 512, 00:09:26.169 "num_blocks": 65536, 00:09:26.169 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:26.169 "assigned_rate_limits": { 00:09:26.169 "rw_ios_per_sec": 0, 00:09:26.169 "rw_mbytes_per_sec": 0, 00:09:26.169 "r_mbytes_per_sec": 0, 00:09:26.169 "w_mbytes_per_sec": 0 00:09:26.169 }, 00:09:26.169 "claimed": true, 00:09:26.169 "claim_type": "exclusive_write", 00:09:26.169 "zoned": false, 00:09:26.169 "supported_io_types": { 00:09:26.169 "read": true, 00:09:26.169 "write": true, 00:09:26.169 "unmap": true, 00:09:26.169 "flush": true, 00:09:26.169 "reset": true, 00:09:26.169 "nvme_admin": false, 00:09:26.169 "nvme_io": false, 00:09:26.169 "nvme_io_md": false, 00:09:26.169 "write_zeroes": true, 00:09:26.169 "zcopy": true, 00:09:26.169 "get_zone_info": false, 00:09:26.169 "zone_management": false, 00:09:26.169 "zone_append": false, 00:09:26.169 "compare": false, 00:09:26.169 "compare_and_write": false, 00:09:26.169 "abort": true, 00:09:26.169 "seek_hole": false, 00:09:26.169 "seek_data": false, 00:09:26.169 "copy": true, 00:09:26.169 "nvme_iov_md": false 00:09:26.169 }, 00:09:26.169 "memory_domains": [ 00:09:26.169 { 00:09:26.169 
"dma_device_id": "system", 00:09:26.169 "dma_device_type": 1 00:09:26.169 }, 00:09:26.169 { 00:09:26.169 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.169 "dma_device_type": 2 00:09:26.169 } 00:09:26.169 ], 00:09:26.169 "driver_specific": {} 00:09:26.169 } 00:09:26.169 ] 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.169 "name": "Existed_Raid", 00:09:26.169 "uuid": "ed82614f-4753-49f3-b86e-e465f3af5c84", 00:09:26.169 "strip_size_kb": 0, 00:09:26.169 "state": "online", 00:09:26.169 "raid_level": "raid1", 00:09:26.169 "superblock": false, 00:09:26.169 "num_base_bdevs": 3, 00:09:26.169 "num_base_bdevs_discovered": 3, 00:09:26.169 "num_base_bdevs_operational": 3, 00:09:26.169 "base_bdevs_list": [ 00:09:26.169 { 00:09:26.169 "name": "NewBaseBdev", 00:09:26.169 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:26.169 "is_configured": true, 00:09:26.169 "data_offset": 0, 00:09:26.169 "data_size": 65536 00:09:26.169 }, 00:09:26.169 { 00:09:26.169 "name": "BaseBdev2", 00:09:26.169 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:26.169 "is_configured": true, 00:09:26.169 "data_offset": 0, 00:09:26.169 "data_size": 65536 00:09:26.169 }, 00:09:26.169 { 00:09:26.169 "name": "BaseBdev3", 00:09:26.169 "uuid": "79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:26.169 "is_configured": true, 00:09:26.169 "data_offset": 0, 00:09:26.169 "data_size": 65536 00:09:26.169 } 00:09:26.169 ] 00:09:26.169 }' 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.169 10:20:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.429 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:26.429 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:26.429 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:26.429 10:20:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:26.429 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.430 [2024-11-19 10:20:40.161274] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:26.430 "name": "Existed_Raid", 00:09:26.430 "aliases": [ 00:09:26.430 "ed82614f-4753-49f3-b86e-e465f3af5c84" 00:09:26.430 ], 00:09:26.430 "product_name": "Raid Volume", 00:09:26.430 "block_size": 512, 00:09:26.430 "num_blocks": 65536, 00:09:26.430 "uuid": "ed82614f-4753-49f3-b86e-e465f3af5c84", 00:09:26.430 "assigned_rate_limits": { 00:09:26.430 "rw_ios_per_sec": 0, 00:09:26.430 "rw_mbytes_per_sec": 0, 00:09:26.430 "r_mbytes_per_sec": 0, 00:09:26.430 "w_mbytes_per_sec": 0 00:09:26.430 }, 00:09:26.430 "claimed": false, 00:09:26.430 "zoned": false, 00:09:26.430 "supported_io_types": { 00:09:26.430 "read": true, 00:09:26.430 "write": true, 00:09:26.430 "unmap": false, 00:09:26.430 "flush": false, 00:09:26.430 "reset": true, 00:09:26.430 "nvme_admin": false, 00:09:26.430 "nvme_io": false, 00:09:26.430 "nvme_io_md": false, 00:09:26.430 "write_zeroes": true, 00:09:26.430 "zcopy": false, 00:09:26.430 
"get_zone_info": false, 00:09:26.430 "zone_management": false, 00:09:26.430 "zone_append": false, 00:09:26.430 "compare": false, 00:09:26.430 "compare_and_write": false, 00:09:26.430 "abort": false, 00:09:26.430 "seek_hole": false, 00:09:26.430 "seek_data": false, 00:09:26.430 "copy": false, 00:09:26.430 "nvme_iov_md": false 00:09:26.430 }, 00:09:26.430 "memory_domains": [ 00:09:26.430 { 00:09:26.430 "dma_device_id": "system", 00:09:26.430 "dma_device_type": 1 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.430 "dma_device_type": 2 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "dma_device_id": "system", 00:09:26.430 "dma_device_type": 1 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.430 "dma_device_type": 2 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "dma_device_id": "system", 00:09:26.430 "dma_device_type": 1 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.430 "dma_device_type": 2 00:09:26.430 } 00:09:26.430 ], 00:09:26.430 "driver_specific": { 00:09:26.430 "raid": { 00:09:26.430 "uuid": "ed82614f-4753-49f3-b86e-e465f3af5c84", 00:09:26.430 "strip_size_kb": 0, 00:09:26.430 "state": "online", 00:09:26.430 "raid_level": "raid1", 00:09:26.430 "superblock": false, 00:09:26.430 "num_base_bdevs": 3, 00:09:26.430 "num_base_bdevs_discovered": 3, 00:09:26.430 "num_base_bdevs_operational": 3, 00:09:26.430 "base_bdevs_list": [ 00:09:26.430 { 00:09:26.430 "name": "NewBaseBdev", 00:09:26.430 "uuid": "cf54d72c-3ddc-4946-a0c2-7e2d0a5bea8f", 00:09:26.430 "is_configured": true, 00:09:26.430 "data_offset": 0, 00:09:26.430 "data_size": 65536 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "name": "BaseBdev2", 00:09:26.430 "uuid": "0ba60195-a0f2-4e15-a9c7-68b3a4aface8", 00:09:26.430 "is_configured": true, 00:09:26.430 "data_offset": 0, 00:09:26.430 "data_size": 65536 00:09:26.430 }, 00:09:26.430 { 00:09:26.430 "name": "BaseBdev3", 00:09:26.430 "uuid": 
"79c5a1f1-ed1e-44ff-bde5-e8e82a9bbbdf", 00:09:26.430 "is_configured": true, 00:09:26.430 "data_offset": 0, 00:09:26.430 "data_size": 65536 00:09:26.430 } 00:09:26.430 ] 00:09:26.430 } 00:09:26.430 } 00:09:26.430 }' 00:09:26.430 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:26.689 BaseBdev2 00:09:26.689 BaseBdev3' 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.689 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.690 
[2024-11-19 10:20:40.448504] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:26.690 [2024-11-19 10:20:40.448576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:26.690 [2024-11-19 10:20:40.448668] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.690 [2024-11-19 10:20:40.448970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.690 [2024-11-19 10:20:40.449060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67207 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67207 ']' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67207 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.690 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67207 00:09:26.949 killing process with pid 67207 00:09:26.949 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.949 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.949 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67207' 00:09:26.949 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67207 00:09:26.949 [2024-11-19 
10:20:40.494613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.949 10:20:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67207 00:09:27.208 [2024-11-19 10:20:40.782421] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:28.154 10:20:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:28.154 00:09:28.154 real 0m10.374s 00:09:28.154 user 0m16.610s 00:09:28.154 sys 0m1.733s 00:09:28.154 10:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.154 10:20:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.154 ************************************ 00:09:28.154 END TEST raid_state_function_test 00:09:28.154 ************************************ 00:09:28.154 10:20:41 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:28.154 10:20:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:28.154 10:20:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.154 10:20:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:28.154 ************************************ 00:09:28.154 START TEST raid_state_function_test_sb 00:09:28.154 ************************************ 00:09:28.154 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:28.425 10:20:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:28.425 
10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:28.425 Process raid pid: 67828 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67828 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67828' 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67828 00:09:28.425 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67828 ']' 00:09:28.426 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.426 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.426 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.426 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.426 10:20:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:28.426 [2024-11-19 10:20:42.024675] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:28.426 [2024-11-19 10:20:42.024877] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.426 [2024-11-19 10:20:42.198157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.685 [2024-11-19 10:20:42.307494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.944 [2024-11-19 10:20:42.505960] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.944 [2024-11-19 10:20:42.506061] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.204 [2024-11-19 10:20:42.845657] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.204 [2024-11-19 10:20:42.845783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.204 [2024-11-19 10:20:42.845831] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.204 [2024-11-19 10:20:42.845855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.204 [2024-11-19 10:20:42.845864] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:29.204 [2024-11-19 10:20:42.845873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.204 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.205 "name": "Existed_Raid", 00:09:29.205 "uuid": "5aa6aab4-16cb-445f-91d4-0f28ee3c6ff0", 00:09:29.205 "strip_size_kb": 0, 00:09:29.205 "state": "configuring", 00:09:29.205 "raid_level": "raid1", 00:09:29.205 "superblock": true, 00:09:29.205 "num_base_bdevs": 3, 00:09:29.205 "num_base_bdevs_discovered": 0, 00:09:29.205 "num_base_bdevs_operational": 3, 00:09:29.205 "base_bdevs_list": [ 00:09:29.205 { 00:09:29.205 "name": "BaseBdev1", 00:09:29.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.205 "is_configured": false, 00:09:29.205 "data_offset": 0, 00:09:29.205 "data_size": 0 00:09:29.205 }, 00:09:29.205 { 00:09:29.205 "name": "BaseBdev2", 00:09:29.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.205 "is_configured": false, 00:09:29.205 "data_offset": 0, 00:09:29.205 "data_size": 0 00:09:29.205 }, 00:09:29.205 { 00:09:29.205 "name": "BaseBdev3", 00:09:29.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.205 "is_configured": false, 00:09:29.205 "data_offset": 0, 00:09:29.205 "data_size": 0 00:09:29.205 } 00:09:29.205 ] 00:09:29.205 }' 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.205 10:20:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.774 [2024-11-19 10:20:43.300850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.774 [2024-11-19 10:20:43.300962] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.774 [2024-11-19 10:20:43.312832] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:29.774 [2024-11-19 10:20:43.312929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:29.774 [2024-11-19 10:20:43.312957] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:29.774 [2024-11-19 10:20:43.312979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:29.774 [2024-11-19 10:20:43.313011] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:29.774 [2024-11-19 10:20:43.313050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.774 [2024-11-19 10:20:43.359300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:29.774 BaseBdev1 
00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.774 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.775 [ 00:09:29.775 { 00:09:29.775 "name": "BaseBdev1", 00:09:29.775 "aliases": [ 00:09:29.775 "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f" 00:09:29.775 ], 00:09:29.775 "product_name": "Malloc disk", 00:09:29.775 "block_size": 512, 00:09:29.775 "num_blocks": 65536, 00:09:29.775 "uuid": "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f", 00:09:29.775 "assigned_rate_limits": { 00:09:29.775 
"rw_ios_per_sec": 0, 00:09:29.775 "rw_mbytes_per_sec": 0, 00:09:29.775 "r_mbytes_per_sec": 0, 00:09:29.775 "w_mbytes_per_sec": 0 00:09:29.775 }, 00:09:29.775 "claimed": true, 00:09:29.775 "claim_type": "exclusive_write", 00:09:29.775 "zoned": false, 00:09:29.775 "supported_io_types": { 00:09:29.775 "read": true, 00:09:29.775 "write": true, 00:09:29.775 "unmap": true, 00:09:29.775 "flush": true, 00:09:29.775 "reset": true, 00:09:29.775 "nvme_admin": false, 00:09:29.775 "nvme_io": false, 00:09:29.775 "nvme_io_md": false, 00:09:29.775 "write_zeroes": true, 00:09:29.775 "zcopy": true, 00:09:29.775 "get_zone_info": false, 00:09:29.775 "zone_management": false, 00:09:29.775 "zone_append": false, 00:09:29.775 "compare": false, 00:09:29.775 "compare_and_write": false, 00:09:29.775 "abort": true, 00:09:29.775 "seek_hole": false, 00:09:29.775 "seek_data": false, 00:09:29.775 "copy": true, 00:09:29.775 "nvme_iov_md": false 00:09:29.775 }, 00:09:29.775 "memory_domains": [ 00:09:29.775 { 00:09:29.775 "dma_device_id": "system", 00:09:29.775 "dma_device_type": 1 00:09:29.775 }, 00:09:29.775 { 00:09:29.775 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.775 "dma_device_type": 2 00:09:29.775 } 00:09:29.775 ], 00:09:29.775 "driver_specific": {} 00:09:29.775 } 00:09:29.775 ] 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.775 "name": "Existed_Raid", 00:09:29.775 "uuid": "d305ae52-ebd0-4f25-9c10-d1c35d288a62", 00:09:29.775 "strip_size_kb": 0, 00:09:29.775 "state": "configuring", 00:09:29.775 "raid_level": "raid1", 00:09:29.775 "superblock": true, 00:09:29.775 "num_base_bdevs": 3, 00:09:29.775 "num_base_bdevs_discovered": 1, 00:09:29.775 "num_base_bdevs_operational": 3, 00:09:29.775 "base_bdevs_list": [ 00:09:29.775 { 00:09:29.775 "name": "BaseBdev1", 00:09:29.775 "uuid": "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f", 00:09:29.775 "is_configured": true, 00:09:29.775 "data_offset": 2048, 00:09:29.775 "data_size": 63488 
00:09:29.775 }, 00:09:29.775 { 00:09:29.775 "name": "BaseBdev2", 00:09:29.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.775 "is_configured": false, 00:09:29.775 "data_offset": 0, 00:09:29.775 "data_size": 0 00:09:29.775 }, 00:09:29.775 { 00:09:29.775 "name": "BaseBdev3", 00:09:29.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.775 "is_configured": false, 00:09:29.775 "data_offset": 0, 00:09:29.775 "data_size": 0 00:09:29.775 } 00:09:29.775 ] 00:09:29.775 }' 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.775 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 [2024-11-19 10:20:43.854476] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.344 [2024-11-19 10:20:43.854589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 [2024-11-19 10:20:43.866494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:30.344 [2024-11-19 10:20:43.868294] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:30.344 [2024-11-19 10:20:43.868371] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:30.344 [2024-11-19 10:20:43.868400] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:30.344 [2024-11-19 10:20:43.868411] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.344 "name": "Existed_Raid", 00:09:30.344 "uuid": "fd907f48-909b-4387-98bc-2b4b71bbc266", 00:09:30.344 "strip_size_kb": 0, 00:09:30.344 "state": "configuring", 00:09:30.344 "raid_level": "raid1", 00:09:30.344 "superblock": true, 00:09:30.344 "num_base_bdevs": 3, 00:09:30.344 "num_base_bdevs_discovered": 1, 00:09:30.344 "num_base_bdevs_operational": 3, 00:09:30.344 "base_bdevs_list": [ 00:09:30.344 { 00:09:30.344 "name": "BaseBdev1", 00:09:30.344 "uuid": "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f", 00:09:30.344 "is_configured": true, 00:09:30.344 "data_offset": 2048, 00:09:30.344 "data_size": 63488 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "name": "BaseBdev2", 00:09:30.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.344 "is_configured": false, 00:09:30.344 "data_offset": 0, 00:09:30.344 "data_size": 0 00:09:30.344 }, 00:09:30.344 { 00:09:30.344 "name": "BaseBdev3", 00:09:30.344 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.344 "is_configured": false, 00:09:30.344 "data_offset": 0, 00:09:30.344 "data_size": 0 00:09:30.344 } 00:09:30.344 ] 00:09:30.344 }' 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.344 10:20:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.603 [2024-11-19 10:20:44.342660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:30.603 BaseBdev2 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.603 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.604 [ 00:09:30.604 { 00:09:30.604 "name": "BaseBdev2", 00:09:30.604 "aliases": [ 00:09:30.604 "152e9a99-371a-4517-a490-9d2803459dfb" 00:09:30.604 ], 00:09:30.604 "product_name": "Malloc disk", 00:09:30.604 "block_size": 512, 00:09:30.604 "num_blocks": 65536, 00:09:30.604 "uuid": "152e9a99-371a-4517-a490-9d2803459dfb", 00:09:30.604 "assigned_rate_limits": { 00:09:30.604 "rw_ios_per_sec": 0, 00:09:30.604 "rw_mbytes_per_sec": 0, 00:09:30.604 "r_mbytes_per_sec": 0, 00:09:30.604 "w_mbytes_per_sec": 0 00:09:30.604 }, 00:09:30.604 "claimed": true, 00:09:30.604 "claim_type": "exclusive_write", 00:09:30.604 "zoned": false, 00:09:30.604 "supported_io_types": { 00:09:30.604 "read": true, 00:09:30.604 "write": true, 00:09:30.604 "unmap": true, 00:09:30.604 "flush": true, 00:09:30.604 "reset": true, 00:09:30.604 "nvme_admin": false, 00:09:30.604 "nvme_io": false, 00:09:30.604 "nvme_io_md": false, 00:09:30.604 "write_zeroes": true, 00:09:30.604 "zcopy": true, 00:09:30.604 "get_zone_info": false, 00:09:30.604 "zone_management": false, 00:09:30.604 "zone_append": false, 00:09:30.604 "compare": false, 00:09:30.604 "compare_and_write": false, 00:09:30.604 "abort": true, 00:09:30.604 "seek_hole": false, 00:09:30.604 "seek_data": false, 00:09:30.604 "copy": true, 00:09:30.604 "nvme_iov_md": false 00:09:30.604 }, 00:09:30.604 "memory_domains": [ 00:09:30.604 { 00:09:30.604 "dma_device_id": "system", 00:09:30.604 "dma_device_type": 1 00:09:30.604 }, 00:09:30.604 { 00:09:30.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.604 "dma_device_type": 2 00:09:30.604 } 00:09:30.604 ], 00:09:30.604 "driver_specific": {} 00:09:30.604 } 00:09:30.604 ] 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
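The `waitforbdev` helper invoked in this trace polls `rpc_cmd bdev_get_bdevs -b <name>` until the bdev appears or `bdev_timeout` (2000 ms by default, per the trace) elapses. A minimal sketch of that polling loop; `check_bdev` is a hypothetical stand-in for the real RPC so the snippet runs without a live SPDK target:

```shell
# Stub standing in for "rpc_cmd bdev_get_bdevs -b <name>" (assumption:
# pretend only BaseBdev2 exists).
check_bdev() { [[ $1 == BaseBdev2 ]]; }

# Poll in 100 ms steps until the bdev is found or the timeout elapses.
waitforbdev_sketch() {
  local bdev_name=$1 bdev_timeout=${2:-2000} i
  for (( i = 0; i < bdev_timeout; i += 100 )); do
    if check_bdev "$bdev_name"; then
      return 0
    fi
    sleep 0.1
  done
  return 1
}

waitforbdev_sketch BaseBdev2 2000 && echo "BaseBdev2 ready"
```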
00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.604 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.863 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.864 
10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.864 "name": "Existed_Raid", 00:09:30.864 "uuid": "fd907f48-909b-4387-98bc-2b4b71bbc266", 00:09:30.864 "strip_size_kb": 0, 00:09:30.864 "state": "configuring", 00:09:30.864 "raid_level": "raid1", 00:09:30.864 "superblock": true, 00:09:30.864 "num_base_bdevs": 3, 00:09:30.864 "num_base_bdevs_discovered": 2, 00:09:30.864 "num_base_bdevs_operational": 3, 00:09:30.864 "base_bdevs_list": [ 00:09:30.864 { 00:09:30.864 "name": "BaseBdev1", 00:09:30.864 "uuid": "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f", 00:09:30.864 "is_configured": true, 00:09:30.864 "data_offset": 2048, 00:09:30.864 "data_size": 63488 00:09:30.864 }, 00:09:30.864 { 00:09:30.864 "name": "BaseBdev2", 00:09:30.864 "uuid": "152e9a99-371a-4517-a490-9d2803459dfb", 00:09:30.864 "is_configured": true, 00:09:30.864 "data_offset": 2048, 00:09:30.864 "data_size": 63488 00:09:30.864 }, 00:09:30.864 { 00:09:30.864 "name": "BaseBdev3", 00:09:30.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.864 "is_configured": false, 00:09:30.864 "data_offset": 0, 00:09:30.864 "data_size": 0 00:09:30.864 } 00:09:30.864 ] 00:09:30.864 }' 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.864 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.124 [2024-11-19 10:20:44.874637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:31.124 [2024-11-19 10:20:44.874884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:31.124 [2024-11-19 10:20:44.874909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:31.124 [2024-11-19 10:20:44.875236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:31.124 [2024-11-19 10:20:44.875394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:31.124 [2024-11-19 10:20:44.875403] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:31.124 BaseBdev3 00:09:31.124 [2024-11-19 10:20:44.875553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.124 10:20:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.124 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.124 [ 00:09:31.124 { 00:09:31.124 "name": "BaseBdev3", 00:09:31.124 "aliases": [ 00:09:31.124 "d7b68923-100c-4035-91e3-4a30a4660ca5" 00:09:31.124 ], 00:09:31.124 "product_name": "Malloc disk", 00:09:31.124 "block_size": 512, 00:09:31.124 "num_blocks": 65536, 00:09:31.124 "uuid": "d7b68923-100c-4035-91e3-4a30a4660ca5", 00:09:31.124 "assigned_rate_limits": { 00:09:31.124 "rw_ios_per_sec": 0, 00:09:31.124 "rw_mbytes_per_sec": 0, 00:09:31.124 "r_mbytes_per_sec": 0, 00:09:31.384 "w_mbytes_per_sec": 0 00:09:31.384 }, 00:09:31.384 "claimed": true, 00:09:31.384 "claim_type": "exclusive_write", 00:09:31.384 "zoned": false, 00:09:31.384 "supported_io_types": { 00:09:31.384 "read": true, 00:09:31.384 "write": true, 00:09:31.384 "unmap": true, 00:09:31.384 "flush": true, 00:09:31.384 "reset": true, 00:09:31.384 "nvme_admin": false, 00:09:31.384 "nvme_io": false, 00:09:31.384 "nvme_io_md": false, 00:09:31.384 "write_zeroes": true, 00:09:31.384 "zcopy": true, 00:09:31.384 "get_zone_info": false, 00:09:31.384 "zone_management": false, 00:09:31.384 "zone_append": false, 00:09:31.384 "compare": false, 00:09:31.384 "compare_and_write": false, 00:09:31.384 "abort": true, 00:09:31.384 "seek_hole": false, 00:09:31.384 "seek_data": false, 00:09:31.384 "copy": true, 00:09:31.384 "nvme_iov_md": false 00:09:31.385 }, 00:09:31.385 "memory_domains": [ 00:09:31.385 { 00:09:31.385 "dma_device_id": "system", 00:09:31.385 "dma_device_type": 1 00:09:31.385 }, 00:09:31.385 { 00:09:31.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.385 "dma_device_type": 2 00:09:31.385 } 00:09:31.385 ], 00:09:31.385 "driver_specific": {} 00:09:31.385 } 00:09:31.385 ] 
00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.385 
10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.385 "name": "Existed_Raid", 00:09:31.385 "uuid": "fd907f48-909b-4387-98bc-2b4b71bbc266", 00:09:31.385 "strip_size_kb": 0, 00:09:31.385 "state": "online", 00:09:31.385 "raid_level": "raid1", 00:09:31.385 "superblock": true, 00:09:31.385 "num_base_bdevs": 3, 00:09:31.385 "num_base_bdevs_discovered": 3, 00:09:31.385 "num_base_bdevs_operational": 3, 00:09:31.385 "base_bdevs_list": [ 00:09:31.385 { 00:09:31.385 "name": "BaseBdev1", 00:09:31.385 "uuid": "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f", 00:09:31.385 "is_configured": true, 00:09:31.385 "data_offset": 2048, 00:09:31.385 "data_size": 63488 00:09:31.385 }, 00:09:31.385 { 00:09:31.385 "name": "BaseBdev2", 00:09:31.385 "uuid": "152e9a99-371a-4517-a490-9d2803459dfb", 00:09:31.385 "is_configured": true, 00:09:31.385 "data_offset": 2048, 00:09:31.385 "data_size": 63488 00:09:31.385 }, 00:09:31.385 { 00:09:31.385 "name": "BaseBdev3", 00:09:31.385 "uuid": "d7b68923-100c-4035-91e3-4a30a4660ca5", 00:09:31.385 "is_configured": true, 00:09:31.385 "data_offset": 2048, 00:09:31.385 "data_size": 63488 00:09:31.385 } 00:09:31.385 ] 00:09:31.385 }' 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.385 10:20:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.646 [2024-11-19 10:20:45.370096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.646 "name": "Existed_Raid", 00:09:31.646 "aliases": [ 00:09:31.646 "fd907f48-909b-4387-98bc-2b4b71bbc266" 00:09:31.646 ], 00:09:31.646 "product_name": "Raid Volume", 00:09:31.646 "block_size": 512, 00:09:31.646 "num_blocks": 63488, 00:09:31.646 "uuid": "fd907f48-909b-4387-98bc-2b4b71bbc266", 00:09:31.646 "assigned_rate_limits": { 00:09:31.646 "rw_ios_per_sec": 0, 00:09:31.646 "rw_mbytes_per_sec": 0, 00:09:31.646 "r_mbytes_per_sec": 0, 00:09:31.646 "w_mbytes_per_sec": 0 00:09:31.646 }, 00:09:31.646 "claimed": false, 00:09:31.646 "zoned": false, 00:09:31.646 "supported_io_types": { 00:09:31.646 "read": true, 00:09:31.646 "write": true, 00:09:31.646 "unmap": false, 00:09:31.646 "flush": false, 00:09:31.646 "reset": true, 00:09:31.646 "nvme_admin": false, 00:09:31.646 "nvme_io": false, 00:09:31.646 "nvme_io_md": false, 00:09:31.646 "write_zeroes": true, 
00:09:31.646 "zcopy": false, 00:09:31.646 "get_zone_info": false, 00:09:31.646 "zone_management": false, 00:09:31.646 "zone_append": false, 00:09:31.646 "compare": false, 00:09:31.646 "compare_and_write": false, 00:09:31.646 "abort": false, 00:09:31.646 "seek_hole": false, 00:09:31.646 "seek_data": false, 00:09:31.646 "copy": false, 00:09:31.646 "nvme_iov_md": false 00:09:31.646 }, 00:09:31.646 "memory_domains": [ 00:09:31.646 { 00:09:31.646 "dma_device_id": "system", 00:09:31.646 "dma_device_type": 1 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.646 "dma_device_type": 2 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "dma_device_id": "system", 00:09:31.646 "dma_device_type": 1 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.646 "dma_device_type": 2 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "dma_device_id": "system", 00:09:31.646 "dma_device_type": 1 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.646 "dma_device_type": 2 00:09:31.646 } 00:09:31.646 ], 00:09:31.646 "driver_specific": { 00:09:31.646 "raid": { 00:09:31.646 "uuid": "fd907f48-909b-4387-98bc-2b4b71bbc266", 00:09:31.646 "strip_size_kb": 0, 00:09:31.646 "state": "online", 00:09:31.646 "raid_level": "raid1", 00:09:31.646 "superblock": true, 00:09:31.646 "num_base_bdevs": 3, 00:09:31.646 "num_base_bdevs_discovered": 3, 00:09:31.646 "num_base_bdevs_operational": 3, 00:09:31.646 "base_bdevs_list": [ 00:09:31.646 { 00:09:31.646 "name": "BaseBdev1", 00:09:31.646 "uuid": "51fc85a4-f7e1-4955-8b7a-0bd5f6dff79f", 00:09:31.646 "is_configured": true, 00:09:31.646 "data_offset": 2048, 00:09:31.646 "data_size": 63488 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "BaseBdev2", 00:09:31.646 "uuid": "152e9a99-371a-4517-a490-9d2803459dfb", 00:09:31.646 "is_configured": true, 00:09:31.646 "data_offset": 2048, 00:09:31.646 "data_size": 63488 00:09:31.646 }, 00:09:31.646 { 
00:09:31.646 "name": "BaseBdev3", 00:09:31.646 "uuid": "d7b68923-100c-4035-91e3-4a30a4660ca5", 00:09:31.646 "is_configured": true, 00:09:31.646 "data_offset": 2048, 00:09:31.646 "data_size": 63488 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 } 00:09:31.646 } 00:09:31.646 }' 00:09:31.646 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:31.907 BaseBdev2 00:09:31.907 BaseBdev3' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.907 10:20:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.907 [2024-11-19 10:20:45.581478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.907 
10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.907 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.167 10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.167 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.167 "name": "Existed_Raid", 00:09:32.167 "uuid": "fd907f48-909b-4387-98bc-2b4b71bbc266", 00:09:32.167 "strip_size_kb": 0, 00:09:32.167 "state": "online", 00:09:32.167 "raid_level": "raid1", 00:09:32.167 "superblock": true, 00:09:32.167 "num_base_bdevs": 3, 00:09:32.167 "num_base_bdevs_discovered": 2, 00:09:32.167 "num_base_bdevs_operational": 2, 00:09:32.167 "base_bdevs_list": [ 00:09:32.167 { 00:09:32.167 "name": null, 00:09:32.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.167 "is_configured": false, 00:09:32.167 "data_offset": 0, 00:09:32.167 "data_size": 63488 00:09:32.167 }, 00:09:32.167 { 00:09:32.167 "name": "BaseBdev2", 00:09:32.167 "uuid": "152e9a99-371a-4517-a490-9d2803459dfb", 00:09:32.167 "is_configured": true, 00:09:32.167 "data_offset": 2048, 00:09:32.167 "data_size": 63488 00:09:32.167 }, 00:09:32.167 { 00:09:32.167 "name": "BaseBdev3", 00:09:32.167 "uuid": "d7b68923-100c-4035-91e3-4a30a4660ca5", 00:09:32.167 "is_configured": true, 00:09:32.167 "data_offset": 2048, 00:09:32.167 "data_size": 63488 00:09:32.167 } 00:09:32.167 ] 00:09:32.167 }' 00:09:32.167 10:20:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.167 
10:20:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:32.427 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.428 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.428 [2024-11-19 10:20:46.159639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.688 [2024-11-19 10:20:46.304979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:32.688 [2024-11-19 10:20:46.305092] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:32.688 [2024-11-19 10:20:46.396916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:32.688 [2024-11-19 10:20:46.396971] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:32.688 [2024-11-19 10:20:46.396984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.688 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.949 BaseBdev2 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.949 [ 00:09:32.949 { 00:09:32.949 "name": "BaseBdev2", 00:09:32.949 "aliases": [ 00:09:32.949 "dce0f400-8566-4db8-9041-aa13e10db606" 00:09:32.949 ], 00:09:32.949 "product_name": "Malloc disk", 00:09:32.949 "block_size": 512, 00:09:32.949 "num_blocks": 65536, 00:09:32.949 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:32.949 "assigned_rate_limits": { 00:09:32.949 "rw_ios_per_sec": 0, 00:09:32.949 "rw_mbytes_per_sec": 0, 00:09:32.949 "r_mbytes_per_sec": 0, 00:09:32.949 "w_mbytes_per_sec": 0 00:09:32.949 }, 00:09:32.949 "claimed": false, 00:09:32.949 "zoned": false, 00:09:32.949 "supported_io_types": { 00:09:32.949 "read": true, 00:09:32.949 "write": true, 00:09:32.949 "unmap": true, 00:09:32.949 "flush": true, 00:09:32.949 "reset": true, 00:09:32.949 "nvme_admin": false, 00:09:32.949 "nvme_io": false, 00:09:32.949 
"nvme_io_md": false, 00:09:32.949 "write_zeroes": true, 00:09:32.949 "zcopy": true, 00:09:32.949 "get_zone_info": false, 00:09:32.949 "zone_management": false, 00:09:32.949 "zone_append": false, 00:09:32.949 "compare": false, 00:09:32.949 "compare_and_write": false, 00:09:32.949 "abort": true, 00:09:32.949 "seek_hole": false, 00:09:32.949 "seek_data": false, 00:09:32.949 "copy": true, 00:09:32.949 "nvme_iov_md": false 00:09:32.949 }, 00:09:32.949 "memory_domains": [ 00:09:32.949 { 00:09:32.949 "dma_device_id": "system", 00:09:32.949 "dma_device_type": 1 00:09:32.949 }, 00:09:32.949 { 00:09:32.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.949 "dma_device_type": 2 00:09:32.949 } 00:09:32.949 ], 00:09:32.949 "driver_specific": {} 00:09:32.949 } 00:09:32.949 ] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.949 BaseBdev3 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.949 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.950 [ 00:09:32.950 { 00:09:32.950 "name": "BaseBdev3", 00:09:32.950 "aliases": [ 00:09:32.950 "a5ab6054-d899-4530-be16-aafecd3d84b2" 00:09:32.950 ], 00:09:32.950 "product_name": "Malloc disk", 00:09:32.950 "block_size": 512, 00:09:32.950 "num_blocks": 65536, 00:09:32.950 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:32.950 "assigned_rate_limits": { 00:09:32.950 "rw_ios_per_sec": 0, 00:09:32.950 "rw_mbytes_per_sec": 0, 00:09:32.950 "r_mbytes_per_sec": 0, 00:09:32.950 "w_mbytes_per_sec": 0 00:09:32.950 }, 00:09:32.950 "claimed": false, 00:09:32.950 "zoned": false, 00:09:32.950 "supported_io_types": { 00:09:32.950 "read": true, 00:09:32.950 "write": true, 00:09:32.950 "unmap": true, 00:09:32.950 "flush": true, 00:09:32.950 "reset": true, 00:09:32.950 "nvme_admin": false, 
00:09:32.950 "nvme_io": false, 00:09:32.950 "nvme_io_md": false, 00:09:32.950 "write_zeroes": true, 00:09:32.950 "zcopy": true, 00:09:32.950 "get_zone_info": false, 00:09:32.950 "zone_management": false, 00:09:32.950 "zone_append": false, 00:09:32.950 "compare": false, 00:09:32.950 "compare_and_write": false, 00:09:32.950 "abort": true, 00:09:32.950 "seek_hole": false, 00:09:32.950 "seek_data": false, 00:09:32.950 "copy": true, 00:09:32.950 "nvme_iov_md": false 00:09:32.950 }, 00:09:32.950 "memory_domains": [ 00:09:32.950 { 00:09:32.950 "dma_device_id": "system", 00:09:32.950 "dma_device_type": 1 00:09:32.950 }, 00:09:32.950 { 00:09:32.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.950 "dma_device_type": 2 00:09:32.950 } 00:09:32.950 ], 00:09:32.950 "driver_specific": {} 00:09:32.950 } 00:09:32.950 ] 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.950 [2024-11-19 10:20:46.604907] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.950 [2024-11-19 10:20:46.605025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.950 [2024-11-19 10:20:46.605084] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.950 [2024-11-19 10:20:46.606815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.950 
10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.950 "name": "Existed_Raid", 00:09:32.950 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:32.950 "strip_size_kb": 0, 00:09:32.950 "state": "configuring", 00:09:32.950 "raid_level": "raid1", 00:09:32.950 "superblock": true, 00:09:32.950 "num_base_bdevs": 3, 00:09:32.950 "num_base_bdevs_discovered": 2, 00:09:32.950 "num_base_bdevs_operational": 3, 00:09:32.950 "base_bdevs_list": [ 00:09:32.950 { 00:09:32.950 "name": "BaseBdev1", 00:09:32.950 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.950 "is_configured": false, 00:09:32.950 "data_offset": 0, 00:09:32.950 "data_size": 0 00:09:32.950 }, 00:09:32.950 { 00:09:32.950 "name": "BaseBdev2", 00:09:32.950 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:32.950 "is_configured": true, 00:09:32.950 "data_offset": 2048, 00:09:32.950 "data_size": 63488 00:09:32.950 }, 00:09:32.950 { 00:09:32.950 "name": "BaseBdev3", 00:09:32.950 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:32.950 "is_configured": true, 00:09:32.950 "data_offset": 2048, 00:09:32.950 "data_size": 63488 00:09:32.950 } 00:09:32.950 ] 00:09:32.950 }' 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.950 10:20:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.520 [2024-11-19 10:20:47.016200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.520 10:20:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.520 "name": 
"Existed_Raid", 00:09:33.520 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:33.520 "strip_size_kb": 0, 00:09:33.520 "state": "configuring", 00:09:33.520 "raid_level": "raid1", 00:09:33.520 "superblock": true, 00:09:33.520 "num_base_bdevs": 3, 00:09:33.520 "num_base_bdevs_discovered": 1, 00:09:33.520 "num_base_bdevs_operational": 3, 00:09:33.520 "base_bdevs_list": [ 00:09:33.520 { 00:09:33.520 "name": "BaseBdev1", 00:09:33.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.520 "is_configured": false, 00:09:33.520 "data_offset": 0, 00:09:33.520 "data_size": 0 00:09:33.520 }, 00:09:33.520 { 00:09:33.520 "name": null, 00:09:33.520 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:33.520 "is_configured": false, 00:09:33.520 "data_offset": 0, 00:09:33.520 "data_size": 63488 00:09:33.520 }, 00:09:33.520 { 00:09:33.520 "name": "BaseBdev3", 00:09:33.520 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:33.520 "is_configured": true, 00:09:33.520 "data_offset": 2048, 00:09:33.520 "data_size": 63488 00:09:33.520 } 00:09:33.520 ] 00:09:33.520 }' 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.520 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:33.780 
10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.780 [2024-11-19 10:20:47.534120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.780 BaseBdev1 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:33.780 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.780 [ 00:09:33.780 { 00:09:34.040 "name": "BaseBdev1", 00:09:34.040 "aliases": [ 00:09:34.040 "8d0ac297-4a55-4efd-90ef-43132f44091b" 00:09:34.040 ], 00:09:34.040 "product_name": "Malloc disk", 00:09:34.040 "block_size": 512, 00:09:34.040 "num_blocks": 65536, 00:09:34.040 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:34.040 "assigned_rate_limits": { 00:09:34.040 "rw_ios_per_sec": 0, 00:09:34.040 "rw_mbytes_per_sec": 0, 00:09:34.040 "r_mbytes_per_sec": 0, 00:09:34.040 "w_mbytes_per_sec": 0 00:09:34.040 }, 00:09:34.040 "claimed": true, 00:09:34.040 "claim_type": "exclusive_write", 00:09:34.040 "zoned": false, 00:09:34.040 "supported_io_types": { 00:09:34.040 "read": true, 00:09:34.040 "write": true, 00:09:34.040 "unmap": true, 00:09:34.040 "flush": true, 00:09:34.040 "reset": true, 00:09:34.040 "nvme_admin": false, 00:09:34.040 "nvme_io": false, 00:09:34.040 "nvme_io_md": false, 00:09:34.040 "write_zeroes": true, 00:09:34.040 "zcopy": true, 00:09:34.040 "get_zone_info": false, 00:09:34.040 "zone_management": false, 00:09:34.040 "zone_append": false, 00:09:34.040 "compare": false, 00:09:34.040 "compare_and_write": false, 00:09:34.040 "abort": true, 00:09:34.040 "seek_hole": false, 00:09:34.040 "seek_data": false, 00:09:34.040 "copy": true, 00:09:34.040 "nvme_iov_md": false 00:09:34.040 }, 00:09:34.040 "memory_domains": [ 00:09:34.040 { 00:09:34.040 "dma_device_id": "system", 00:09:34.040 "dma_device_type": 1 00:09:34.040 }, 00:09:34.040 { 00:09:34.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.040 "dma_device_type": 2 00:09:34.040 } 00:09:34.040 ], 00:09:34.040 "driver_specific": {} 00:09:34.040 } 00:09:34.040 ] 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.040 
10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.040 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.041 "name": "Existed_Raid", 00:09:34.041 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:34.041 "strip_size_kb": 0, 
00:09:34.041 "state": "configuring", 00:09:34.041 "raid_level": "raid1", 00:09:34.041 "superblock": true, 00:09:34.041 "num_base_bdevs": 3, 00:09:34.041 "num_base_bdevs_discovered": 2, 00:09:34.041 "num_base_bdevs_operational": 3, 00:09:34.041 "base_bdevs_list": [ 00:09:34.041 { 00:09:34.041 "name": "BaseBdev1", 00:09:34.041 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:34.041 "is_configured": true, 00:09:34.041 "data_offset": 2048, 00:09:34.041 "data_size": 63488 00:09:34.041 }, 00:09:34.041 { 00:09:34.041 "name": null, 00:09:34.041 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:34.041 "is_configured": false, 00:09:34.041 "data_offset": 0, 00:09:34.041 "data_size": 63488 00:09:34.041 }, 00:09:34.041 { 00:09:34.041 "name": "BaseBdev3", 00:09:34.041 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:34.041 "is_configured": true, 00:09:34.041 "data_offset": 2048, 00:09:34.041 "data_size": 63488 00:09:34.041 } 00:09:34.041 ] 00:09:34.041 }' 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.041 10:20:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.300 [2024-11-19 10:20:48.061246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.300 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.300 10:20:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.560 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.560 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.560 "name": "Existed_Raid", 00:09:34.560 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:34.560 "strip_size_kb": 0, 00:09:34.560 "state": "configuring", 00:09:34.560 "raid_level": "raid1", 00:09:34.560 "superblock": true, 00:09:34.560 "num_base_bdevs": 3, 00:09:34.560 "num_base_bdevs_discovered": 1, 00:09:34.560 "num_base_bdevs_operational": 3, 00:09:34.560 "base_bdevs_list": [ 00:09:34.560 { 00:09:34.560 "name": "BaseBdev1", 00:09:34.560 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:34.560 "is_configured": true, 00:09:34.560 "data_offset": 2048, 00:09:34.560 "data_size": 63488 00:09:34.560 }, 00:09:34.560 { 00:09:34.560 "name": null, 00:09:34.560 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:34.560 "is_configured": false, 00:09:34.560 "data_offset": 0, 00:09:34.560 "data_size": 63488 00:09:34.560 }, 00:09:34.560 { 00:09:34.560 "name": null, 00:09:34.560 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:34.560 "is_configured": false, 00:09:34.560 "data_offset": 0, 00:09:34.560 "data_size": 63488 00:09:34.560 } 00:09:34.560 ] 00:09:34.560 }' 00:09:34.560 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.560 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.821 [2024-11-19 10:20:48.504572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.821 "name": "Existed_Raid", 00:09:34.821 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:34.821 "strip_size_kb": 0, 00:09:34.821 "state": "configuring", 00:09:34.821 "raid_level": "raid1", 00:09:34.821 "superblock": true, 00:09:34.821 "num_base_bdevs": 3, 00:09:34.821 "num_base_bdevs_discovered": 2, 00:09:34.821 "num_base_bdevs_operational": 3, 00:09:34.821 "base_bdevs_list": [ 00:09:34.821 { 00:09:34.821 "name": "BaseBdev1", 00:09:34.821 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:34.821 "is_configured": true, 00:09:34.821 "data_offset": 2048, 00:09:34.821 "data_size": 63488 00:09:34.821 }, 00:09:34.821 { 00:09:34.821 "name": null, 00:09:34.821 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:34.821 "is_configured": false, 00:09:34.821 "data_offset": 0, 00:09:34.821 "data_size": 63488 00:09:34.821 }, 00:09:34.821 { 00:09:34.821 "name": "BaseBdev3", 00:09:34.821 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:34.821 "is_configured": true, 00:09:34.821 "data_offset": 2048, 00:09:34.821 "data_size": 63488 00:09:34.821 } 00:09:34.821 ] 00:09:34.821 }' 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.821 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.391 10:20:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.391 [2024-11-19 10:20:48.951835] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.391 "name": "Existed_Raid", 00:09:35.391 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:35.391 "strip_size_kb": 0, 00:09:35.391 "state": "configuring", 00:09:35.391 "raid_level": "raid1", 00:09:35.391 "superblock": true, 00:09:35.391 "num_base_bdevs": 3, 00:09:35.391 "num_base_bdevs_discovered": 1, 00:09:35.391 "num_base_bdevs_operational": 3, 00:09:35.391 "base_bdevs_list": [ 00:09:35.391 { 00:09:35.391 "name": null, 00:09:35.391 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:35.391 "is_configured": false, 00:09:35.391 "data_offset": 0, 00:09:35.391 "data_size": 63488 00:09:35.391 }, 00:09:35.391 { 00:09:35.391 "name": null, 00:09:35.391 "uuid": 
"dce0f400-8566-4db8-9041-aa13e10db606", 00:09:35.391 "is_configured": false, 00:09:35.391 "data_offset": 0, 00:09:35.391 "data_size": 63488 00:09:35.391 }, 00:09:35.391 { 00:09:35.391 "name": "BaseBdev3", 00:09:35.391 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:35.391 "is_configured": true, 00:09:35.391 "data_offset": 2048, 00:09:35.391 "data_size": 63488 00:09:35.391 } 00:09:35.391 ] 00:09:35.391 }' 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.391 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.960 [2024-11-19 10:20:49.544124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.960 "name": "Existed_Raid", 00:09:35.960 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:35.960 "strip_size_kb": 0, 00:09:35.960 "state": "configuring", 00:09:35.960 
"raid_level": "raid1", 00:09:35.960 "superblock": true, 00:09:35.960 "num_base_bdevs": 3, 00:09:35.960 "num_base_bdevs_discovered": 2, 00:09:35.960 "num_base_bdevs_operational": 3, 00:09:35.960 "base_bdevs_list": [ 00:09:35.960 { 00:09:35.960 "name": null, 00:09:35.960 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:35.960 "is_configured": false, 00:09:35.960 "data_offset": 0, 00:09:35.960 "data_size": 63488 00:09:35.960 }, 00:09:35.960 { 00:09:35.960 "name": "BaseBdev2", 00:09:35.960 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:35.960 "is_configured": true, 00:09:35.960 "data_offset": 2048, 00:09:35.960 "data_size": 63488 00:09:35.960 }, 00:09:35.960 { 00:09:35.960 "name": "BaseBdev3", 00:09:35.960 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:35.960 "is_configured": true, 00:09:35.960 "data_offset": 2048, 00:09:35.960 "data_size": 63488 00:09:35.960 } 00:09:35.960 ] 00:09:35.960 }' 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.960 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.280 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.280 10:20:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.280 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.280 10:20:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.280 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.566 10:20:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8d0ac297-4a55-4efd-90ef-43132f44091b 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.566 [2024-11-19 10:20:50.104114] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:36.566 [2024-11-19 10:20:50.104387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:36.566 [2024-11-19 10:20:50.104435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:36.566 [2024-11-19 10:20:50.104692] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:36.566 [2024-11-19 10:20:50.104881] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:36.566 [2024-11-19 10:20:50.104926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:36.566 NewBaseBdev 00:09:36.566 [2024-11-19 10:20:50.105115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:36.566 
10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.566 [ 00:09:36.566 { 00:09:36.566 "name": "NewBaseBdev", 00:09:36.566 "aliases": [ 00:09:36.566 "8d0ac297-4a55-4efd-90ef-43132f44091b" 00:09:36.566 ], 00:09:36.566 "product_name": "Malloc disk", 00:09:36.566 "block_size": 512, 00:09:36.566 "num_blocks": 65536, 00:09:36.566 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:36.566 "assigned_rate_limits": { 00:09:36.566 "rw_ios_per_sec": 0, 00:09:36.566 "rw_mbytes_per_sec": 0, 00:09:36.566 "r_mbytes_per_sec": 0, 00:09:36.566 "w_mbytes_per_sec": 0 00:09:36.566 }, 00:09:36.566 "claimed": true, 00:09:36.566 "claim_type": "exclusive_write", 00:09:36.566 
"zoned": false, 00:09:36.566 "supported_io_types": { 00:09:36.566 "read": true, 00:09:36.566 "write": true, 00:09:36.566 "unmap": true, 00:09:36.566 "flush": true, 00:09:36.566 "reset": true, 00:09:36.566 "nvme_admin": false, 00:09:36.566 "nvme_io": false, 00:09:36.566 "nvme_io_md": false, 00:09:36.566 "write_zeroes": true, 00:09:36.566 "zcopy": true, 00:09:36.566 "get_zone_info": false, 00:09:36.566 "zone_management": false, 00:09:36.566 "zone_append": false, 00:09:36.566 "compare": false, 00:09:36.566 "compare_and_write": false, 00:09:36.566 "abort": true, 00:09:36.566 "seek_hole": false, 00:09:36.566 "seek_data": false, 00:09:36.566 "copy": true, 00:09:36.566 "nvme_iov_md": false 00:09:36.566 }, 00:09:36.566 "memory_domains": [ 00:09:36.566 { 00:09:36.566 "dma_device_id": "system", 00:09:36.566 "dma_device_type": 1 00:09:36.566 }, 00:09:36.566 { 00:09:36.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.566 "dma_device_type": 2 00:09:36.566 } 00:09:36.566 ], 00:09:36.566 "driver_specific": {} 00:09:36.566 } 00:09:36.566 ] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.566 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.566 "name": "Existed_Raid", 00:09:36.566 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:36.566 "strip_size_kb": 0, 00:09:36.566 "state": "online", 00:09:36.566 "raid_level": "raid1", 00:09:36.566 "superblock": true, 00:09:36.566 "num_base_bdevs": 3, 00:09:36.566 "num_base_bdevs_discovered": 3, 00:09:36.566 "num_base_bdevs_operational": 3, 00:09:36.566 "base_bdevs_list": [ 00:09:36.566 { 00:09:36.566 "name": "NewBaseBdev", 00:09:36.566 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:36.566 "is_configured": true, 00:09:36.566 "data_offset": 2048, 00:09:36.566 "data_size": 63488 00:09:36.566 }, 00:09:36.566 { 00:09:36.566 "name": "BaseBdev2", 00:09:36.566 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:36.566 "is_configured": true, 00:09:36.566 "data_offset": 2048, 00:09:36.567 "data_size": 63488 00:09:36.567 }, 00:09:36.567 
{ 00:09:36.567 "name": "BaseBdev3", 00:09:36.567 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:36.567 "is_configured": true, 00:09:36.567 "data_offset": 2048, 00:09:36.567 "data_size": 63488 00:09:36.567 } 00:09:36.567 ] 00:09:36.567 }' 00:09:36.567 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.567 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:37.137 [2024-11-19 10:20:50.631520] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.137 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:37.137 "name": "Existed_Raid", 00:09:37.137 
"aliases": [ 00:09:37.137 "104a03e1-90eb-4f26-b2aa-bd7f9034f91c" 00:09:37.137 ], 00:09:37.137 "product_name": "Raid Volume", 00:09:37.137 "block_size": 512, 00:09:37.137 "num_blocks": 63488, 00:09:37.137 "uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:37.137 "assigned_rate_limits": { 00:09:37.137 "rw_ios_per_sec": 0, 00:09:37.137 "rw_mbytes_per_sec": 0, 00:09:37.137 "r_mbytes_per_sec": 0, 00:09:37.137 "w_mbytes_per_sec": 0 00:09:37.137 }, 00:09:37.137 "claimed": false, 00:09:37.137 "zoned": false, 00:09:37.137 "supported_io_types": { 00:09:37.137 "read": true, 00:09:37.137 "write": true, 00:09:37.137 "unmap": false, 00:09:37.137 "flush": false, 00:09:37.137 "reset": true, 00:09:37.137 "nvme_admin": false, 00:09:37.137 "nvme_io": false, 00:09:37.137 "nvme_io_md": false, 00:09:37.137 "write_zeroes": true, 00:09:37.137 "zcopy": false, 00:09:37.137 "get_zone_info": false, 00:09:37.137 "zone_management": false, 00:09:37.137 "zone_append": false, 00:09:37.137 "compare": false, 00:09:37.137 "compare_and_write": false, 00:09:37.137 "abort": false, 00:09:37.137 "seek_hole": false, 00:09:37.137 "seek_data": false, 00:09:37.137 "copy": false, 00:09:37.137 "nvme_iov_md": false 00:09:37.137 }, 00:09:37.137 "memory_domains": [ 00:09:37.137 { 00:09:37.137 "dma_device_id": "system", 00:09:37.137 "dma_device_type": 1 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.137 "dma_device_type": 2 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "system", 00:09:37.137 "dma_device_type": 1 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.137 "dma_device_type": 2 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "system", 00:09:37.137 "dma_device_type": 1 00:09:37.137 }, 00:09:37.137 { 00:09:37.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:37.137 "dma_device_type": 2 00:09:37.138 } 00:09:37.138 ], 00:09:37.138 "driver_specific": { 00:09:37.138 "raid": { 00:09:37.138 
"uuid": "104a03e1-90eb-4f26-b2aa-bd7f9034f91c", 00:09:37.138 "strip_size_kb": 0, 00:09:37.138 "state": "online", 00:09:37.138 "raid_level": "raid1", 00:09:37.138 "superblock": true, 00:09:37.138 "num_base_bdevs": 3, 00:09:37.138 "num_base_bdevs_discovered": 3, 00:09:37.138 "num_base_bdevs_operational": 3, 00:09:37.138 "base_bdevs_list": [ 00:09:37.138 { 00:09:37.138 "name": "NewBaseBdev", 00:09:37.138 "uuid": "8d0ac297-4a55-4efd-90ef-43132f44091b", 00:09:37.138 "is_configured": true, 00:09:37.138 "data_offset": 2048, 00:09:37.138 "data_size": 63488 00:09:37.138 }, 00:09:37.138 { 00:09:37.138 "name": "BaseBdev2", 00:09:37.138 "uuid": "dce0f400-8566-4db8-9041-aa13e10db606", 00:09:37.138 "is_configured": true, 00:09:37.138 "data_offset": 2048, 00:09:37.138 "data_size": 63488 00:09:37.138 }, 00:09:37.138 { 00:09:37.138 "name": "BaseBdev3", 00:09:37.138 "uuid": "a5ab6054-d899-4530-be16-aafecd3d84b2", 00:09:37.138 "is_configured": true, 00:09:37.138 "data_offset": 2048, 00:09:37.138 "data_size": 63488 00:09:37.138 } 00:09:37.138 ] 00:09:37.138 } 00:09:37.138 } 00:09:37.138 }' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:37.138 BaseBdev2 00:09:37.138 BaseBdev3' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.138 
10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.138 [2024-11-19 10:20:50.902816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.138 [2024-11-19 10:20:50.902887] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.138 [2024-11-19 10:20:50.902969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.138 [2024-11-19 10:20:50.903272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:37.138 [2024-11-19 10:20:50.903327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67828 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67828 ']' 00:09:37.138 10:20:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67828 00:09:37.138 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67828 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67828' 00:09:37.398 killing process with pid 67828 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67828 00:09:37.398 [2024-11-19 10:20:50.951634] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:37.398 10:20:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67828 00:09:37.658 [2024-11-19 10:20:51.236543] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.598 10:20:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:38.598 00:09:38.598 real 0m10.345s 00:09:38.598 user 0m16.560s 00:09:38.598 sys 0m1.758s 00:09:38.598 10:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.598 ************************************ 00:09:38.598 END TEST raid_state_function_test_sb 00:09:38.598 ************************************ 00:09:38.598 10:20:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.598 10:20:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:09:38.598 10:20:52 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:38.598 10:20:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.598 10:20:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.598 ************************************ 00:09:38.598 START TEST raid_superblock_test 00:09:38.598 ************************************ 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:38.598 10:20:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68454 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68454 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68454 ']' 00:09:38.598 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.599 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.599 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.599 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.599 10:20:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.858 [2024-11-19 10:20:52.428225] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:38.858 [2024-11-19 10:20:52.428443] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68454 ] 00:09:38.858 [2024-11-19 10:20:52.600547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.118 [2024-11-19 10:20:52.705122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.118 [2024-11-19 10:20:52.891382] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.118 [2024-11-19 10:20:52.891434] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:39.688 
10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.688 malloc1 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.688 [2024-11-19 10:20:53.284401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.688 [2024-11-19 10:20:53.284520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.688 [2024-11-19 10:20:53.284563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:39.688 [2024-11-19 10:20:53.284592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.688 [2024-11-19 10:20:53.286653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.688 [2024-11-19 10:20:53.286722] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.688 pt1 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.688 malloc2 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.688 [2024-11-19 10:20:53.343279] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.688 [2024-11-19 10:20:53.343334] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.688 [2024-11-19 10:20:53.343372] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:39.688 [2024-11-19 10:20:53.343381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.688 [2024-11-19 10:20:53.345488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.688 [2024-11-19 10:20:53.345527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.688 
pt2 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:39.688 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.689 malloc3 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.689 [2024-11-19 10:20:53.408831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:39.689 [2024-11-19 10:20:53.408920] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.689 [2024-11-19 10:20:53.408981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.689 [2024-11-19 10:20:53.409028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.689 [2024-11-19 10:20:53.411020] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.689 [2024-11-19 10:20:53.411082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:39.689 pt3 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.689 [2024-11-19 10:20:53.420852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.689 [2024-11-19 10:20:53.422608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.689 [2024-11-19 10:20:53.422726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:39.689 [2024-11-19 10:20:53.422902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:39.689 [2024-11-19 10:20:53.422959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.689 [2024-11-19 10:20:53.423232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:39.689 
[2024-11-19 10:20:53.423446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:39.689 [2024-11-19 10:20:53.423492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:39.689 [2024-11-19 10:20:53.423671] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.689 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.949 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.949 "name": "raid_bdev1", 00:09:39.949 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:39.949 "strip_size_kb": 0, 00:09:39.949 "state": "online", 00:09:39.949 "raid_level": "raid1", 00:09:39.949 "superblock": true, 00:09:39.949 "num_base_bdevs": 3, 00:09:39.949 "num_base_bdevs_discovered": 3, 00:09:39.949 "num_base_bdevs_operational": 3, 00:09:39.949 "base_bdevs_list": [ 00:09:39.949 { 00:09:39.949 "name": "pt1", 00:09:39.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.949 "is_configured": true, 00:09:39.949 "data_offset": 2048, 00:09:39.949 "data_size": 63488 00:09:39.949 }, 00:09:39.949 { 00:09:39.949 "name": "pt2", 00:09:39.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.949 "is_configured": true, 00:09:39.949 "data_offset": 2048, 00:09:39.949 "data_size": 63488 00:09:39.949 }, 00:09:39.949 { 00:09:39.949 "name": "pt3", 00:09:39.949 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:39.949 "is_configured": true, 00:09:39.949 "data_offset": 2048, 00:09:39.949 "data_size": 63488 00:09:39.949 } 00:09:39.949 ] 00:09:39.949 }' 00:09:39.949 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.949 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.210 10:20:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.210 [2024-11-19 10:20:53.836446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.210 "name": "raid_bdev1", 00:09:40.210 "aliases": [ 00:09:40.210 "c33ca207-46f0-42da-b7d9-e5f00f70c98a" 00:09:40.210 ], 00:09:40.210 "product_name": "Raid Volume", 00:09:40.210 "block_size": 512, 00:09:40.210 "num_blocks": 63488, 00:09:40.210 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:40.210 "assigned_rate_limits": { 00:09:40.210 "rw_ios_per_sec": 0, 00:09:40.210 "rw_mbytes_per_sec": 0, 00:09:40.210 "r_mbytes_per_sec": 0, 00:09:40.210 "w_mbytes_per_sec": 0 00:09:40.210 }, 00:09:40.210 "claimed": false, 00:09:40.210 "zoned": false, 00:09:40.210 "supported_io_types": { 00:09:40.210 "read": true, 00:09:40.210 "write": true, 00:09:40.210 "unmap": false, 00:09:40.210 "flush": false, 00:09:40.210 "reset": true, 00:09:40.210 "nvme_admin": false, 00:09:40.210 "nvme_io": false, 00:09:40.210 "nvme_io_md": false, 00:09:40.210 "write_zeroes": true, 00:09:40.210 "zcopy": false, 00:09:40.210 "get_zone_info": false, 00:09:40.210 "zone_management": false, 00:09:40.210 "zone_append": false, 00:09:40.210 "compare": false, 00:09:40.210 
"compare_and_write": false, 00:09:40.210 "abort": false, 00:09:40.210 "seek_hole": false, 00:09:40.210 "seek_data": false, 00:09:40.210 "copy": false, 00:09:40.210 "nvme_iov_md": false 00:09:40.210 }, 00:09:40.210 "memory_domains": [ 00:09:40.210 { 00:09:40.210 "dma_device_id": "system", 00:09:40.210 "dma_device_type": 1 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.210 "dma_device_type": 2 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "dma_device_id": "system", 00:09:40.210 "dma_device_type": 1 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.210 "dma_device_type": 2 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "dma_device_id": "system", 00:09:40.210 "dma_device_type": 1 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.210 "dma_device_type": 2 00:09:40.210 } 00:09:40.210 ], 00:09:40.210 "driver_specific": { 00:09:40.210 "raid": { 00:09:40.210 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:40.210 "strip_size_kb": 0, 00:09:40.210 "state": "online", 00:09:40.210 "raid_level": "raid1", 00:09:40.210 "superblock": true, 00:09:40.210 "num_base_bdevs": 3, 00:09:40.210 "num_base_bdevs_discovered": 3, 00:09:40.210 "num_base_bdevs_operational": 3, 00:09:40.210 "base_bdevs_list": [ 00:09:40.210 { 00:09:40.210 "name": "pt1", 00:09:40.210 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.210 "is_configured": true, 00:09:40.210 "data_offset": 2048, 00:09:40.210 "data_size": 63488 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "name": "pt2", 00:09:40.210 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.210 "is_configured": true, 00:09:40.210 "data_offset": 2048, 00:09:40.210 "data_size": 63488 00:09:40.210 }, 00:09:40.210 { 00:09:40.210 "name": "pt3", 00:09:40.210 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.210 "is_configured": true, 00:09:40.210 "data_offset": 2048, 00:09:40.210 "data_size": 63488 00:09:40.210 } 
00:09:40.210 ] 00:09:40.210 } 00:09:40.210 } 00:09:40.210 }' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.210 pt2 00:09:40.210 pt3' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.210 10:20:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:40.471 [2024-11-19 10:20:54.111888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c33ca207-46f0-42da-b7d9-e5f00f70c98a 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c33ca207-46f0-42da-b7d9-e5f00f70c98a ']' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 [2024-11-19 10:20:54.155550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.471 [2024-11-19 10:20:54.155576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.471 [2024-11-19 10:20:54.155646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.471 [2024-11-19 10:20:54.155715] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.471 [2024-11-19 10:20:54.155725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.471 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.730 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:40.731 10:20:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.731 [2024-11-19 10:20:54.311385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:40.731 [2024-11-19 10:20:54.313153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:40.731 [2024-11-19 10:20:54.313203] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:40.731 [2024-11-19 10:20:54.313252] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:40.731 [2024-11-19 10:20:54.313323] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:40.731 [2024-11-19 10:20:54.313341] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:40.731 [2024-11-19 10:20:54.313358] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.731 [2024-11-19 10:20:54.313368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:40.731 request: 00:09:40.731 { 00:09:40.731 "name": "raid_bdev1", 00:09:40.731 "raid_level": "raid1", 00:09:40.731 "base_bdevs": [ 00:09:40.731 "malloc1", 00:09:40.731 "malloc2", 00:09:40.731 "malloc3" 00:09:40.731 ], 00:09:40.731 "superblock": false, 00:09:40.731 "method": "bdev_raid_create", 00:09:40.731 "req_id": 1 00:09:40.731 } 00:09:40.731 Got JSON-RPC error response 00:09:40.731 response: 00:09:40.731 { 00:09:40.731 "code": -17, 00:09:40.731 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:40.731 } 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.731 [2024-11-19 10:20:54.375200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:40.731 [2024-11-19 10:20:54.375318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:40.731 [2024-11-19 10:20:54.375350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:40.731 [2024-11-19 10:20:54.375359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:40.731 [2024-11-19 10:20:54.377503] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:40.731 [2024-11-19 10:20:54.377537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:40.731 [2024-11-19 10:20:54.377620] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:40.731 [2024-11-19 10:20:54.377668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:40.731 pt1 00:09:40.731 
10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.731 "name": "raid_bdev1", 00:09:40.731 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:40.731 "strip_size_kb": 0, 00:09:40.731 
"state": "configuring", 00:09:40.731 "raid_level": "raid1", 00:09:40.731 "superblock": true, 00:09:40.731 "num_base_bdevs": 3, 00:09:40.731 "num_base_bdevs_discovered": 1, 00:09:40.731 "num_base_bdevs_operational": 3, 00:09:40.731 "base_bdevs_list": [ 00:09:40.731 { 00:09:40.731 "name": "pt1", 00:09:40.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.731 "is_configured": true, 00:09:40.731 "data_offset": 2048, 00:09:40.731 "data_size": 63488 00:09:40.731 }, 00:09:40.731 { 00:09:40.731 "name": null, 00:09:40.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.731 "is_configured": false, 00:09:40.731 "data_offset": 2048, 00:09:40.731 "data_size": 63488 00:09:40.731 }, 00:09:40.731 { 00:09:40.731 "name": null, 00:09:40.731 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:40.731 "is_configured": false, 00:09:40.731 "data_offset": 2048, 00:09:40.731 "data_size": 63488 00:09:40.731 } 00:09:40.731 ] 00:09:40.731 }' 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.731 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.300 [2024-11-19 10:20:54.834449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.300 [2024-11-19 10:20:54.834585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.300 [2024-11-19 10:20:54.834627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:41.300 
[2024-11-19 10:20:54.834658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.300 [2024-11-19 10:20:54.835195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.300 [2024-11-19 10:20:54.835254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.300 [2024-11-19 10:20:54.835371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.300 [2024-11-19 10:20:54.835424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.300 pt2 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.300 [2024-11-19 10:20:54.846458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.300 "name": "raid_bdev1", 00:09:41.300 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:41.300 "strip_size_kb": 0, 00:09:41.300 "state": "configuring", 00:09:41.300 "raid_level": "raid1", 00:09:41.300 "superblock": true, 00:09:41.300 "num_base_bdevs": 3, 00:09:41.300 "num_base_bdevs_discovered": 1, 00:09:41.300 "num_base_bdevs_operational": 3, 00:09:41.300 "base_bdevs_list": [ 00:09:41.300 { 00:09:41.300 "name": "pt1", 00:09:41.300 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.300 "is_configured": true, 00:09:41.300 "data_offset": 2048, 00:09:41.300 "data_size": 63488 00:09:41.300 }, 00:09:41.300 { 00:09:41.300 "name": null, 00:09:41.300 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.300 "is_configured": false, 00:09:41.300 "data_offset": 0, 00:09:41.300 "data_size": 63488 00:09:41.300 }, 00:09:41.300 { 00:09:41.300 "name": null, 00:09:41.300 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.300 "is_configured": false, 00:09:41.300 
"data_offset": 2048, 00:09:41.300 "data_size": 63488 00:09:41.300 } 00:09:41.300 ] 00:09:41.300 }' 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.300 10:20:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.560 [2024-11-19 10:20:55.249714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.560 [2024-11-19 10:20:55.249784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.560 [2024-11-19 10:20:55.249803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:41.560 [2024-11-19 10:20:55.249813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.560 [2024-11-19 10:20:55.250277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.560 [2024-11-19 10:20:55.250299] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.560 [2024-11-19 10:20:55.250379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.560 [2024-11-19 10:20:55.250420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.560 pt2 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.560 10:20:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.560 [2024-11-19 10:20:55.261662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:41.560 [2024-11-19 10:20:55.261754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.560 [2024-11-19 10:20:55.261777] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:41.560 [2024-11-19 10:20:55.261790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.560 [2024-11-19 10:20:55.262177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.560 [2024-11-19 10:20:55.262200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:41.560 [2024-11-19 10:20:55.262264] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:41.560 [2024-11-19 10:20:55.262285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:41.560 [2024-11-19 10:20:55.262404] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:41.560 [2024-11-19 10:20:55.262417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.560 [2024-11-19 10:20:55.262641] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:41.560 [2024-11-19 10:20:55.262810] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:41.560 [2024-11-19 10:20:55.262826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:41.560 [2024-11-19 10:20:55.262959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.560 pt3 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.560 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.561 "name": "raid_bdev1", 00:09:41.561 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:41.561 "strip_size_kb": 0, 00:09:41.561 "state": "online", 00:09:41.561 "raid_level": "raid1", 00:09:41.561 "superblock": true, 00:09:41.561 "num_base_bdevs": 3, 00:09:41.561 "num_base_bdevs_discovered": 3, 00:09:41.561 "num_base_bdevs_operational": 3, 00:09:41.561 "base_bdevs_list": [ 00:09:41.561 { 00:09:41.561 "name": "pt1", 00:09:41.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:41.561 "is_configured": true, 00:09:41.561 "data_offset": 2048, 00:09:41.561 "data_size": 63488 00:09:41.561 }, 00:09:41.561 { 00:09:41.561 "name": "pt2", 00:09:41.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.561 "is_configured": true, 00:09:41.561 "data_offset": 2048, 00:09:41.561 "data_size": 63488 00:09:41.561 }, 00:09:41.561 { 00:09:41.561 "name": "pt3", 00:09:41.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:41.561 "is_configured": true, 00:09:41.561 "data_offset": 2048, 00:09:41.561 "data_size": 63488 00:09:41.561 } 00:09:41.561 ] 00:09:41.561 }' 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.561 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.129 [2024-11-19 10:20:55.737182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.129 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:42.129 "name": "raid_bdev1", 00:09:42.129 "aliases": [ 00:09:42.129 "c33ca207-46f0-42da-b7d9-e5f00f70c98a" 00:09:42.129 ], 00:09:42.129 "product_name": "Raid Volume", 00:09:42.129 "block_size": 512, 00:09:42.129 "num_blocks": 63488, 00:09:42.129 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:42.129 "assigned_rate_limits": { 00:09:42.129 "rw_ios_per_sec": 0, 00:09:42.129 "rw_mbytes_per_sec": 0, 00:09:42.129 "r_mbytes_per_sec": 0, 00:09:42.129 "w_mbytes_per_sec": 0 00:09:42.129 }, 00:09:42.129 "claimed": false, 00:09:42.129 "zoned": false, 00:09:42.129 "supported_io_types": { 00:09:42.129 "read": true, 00:09:42.129 "write": true, 00:09:42.129 "unmap": false, 00:09:42.129 "flush": false, 00:09:42.129 "reset": true, 00:09:42.129 "nvme_admin": false, 00:09:42.129 "nvme_io": false, 00:09:42.129 "nvme_io_md": false, 00:09:42.129 "write_zeroes": true, 00:09:42.129 "zcopy": false, 00:09:42.129 "get_zone_info": false, 
00:09:42.129 "zone_management": false, 00:09:42.129 "zone_append": false, 00:09:42.129 "compare": false, 00:09:42.129 "compare_and_write": false, 00:09:42.129 "abort": false, 00:09:42.129 "seek_hole": false, 00:09:42.129 "seek_data": false, 00:09:42.129 "copy": false, 00:09:42.129 "nvme_iov_md": false 00:09:42.129 }, 00:09:42.129 "memory_domains": [ 00:09:42.129 { 00:09:42.129 "dma_device_id": "system", 00:09:42.129 "dma_device_type": 1 00:09:42.129 }, 00:09:42.129 { 00:09:42.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.129 "dma_device_type": 2 00:09:42.129 }, 00:09:42.129 { 00:09:42.129 "dma_device_id": "system", 00:09:42.129 "dma_device_type": 1 00:09:42.129 }, 00:09:42.129 { 00:09:42.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.129 "dma_device_type": 2 00:09:42.129 }, 00:09:42.129 { 00:09:42.129 "dma_device_id": "system", 00:09:42.129 "dma_device_type": 1 00:09:42.129 }, 00:09:42.129 { 00:09:42.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.129 "dma_device_type": 2 00:09:42.129 } 00:09:42.129 ], 00:09:42.129 "driver_specific": { 00:09:42.129 "raid": { 00:09:42.129 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:42.129 "strip_size_kb": 0, 00:09:42.129 "state": "online", 00:09:42.129 "raid_level": "raid1", 00:09:42.129 "superblock": true, 00:09:42.129 "num_base_bdevs": 3, 00:09:42.129 "num_base_bdevs_discovered": 3, 00:09:42.130 "num_base_bdevs_operational": 3, 00:09:42.130 "base_bdevs_list": [ 00:09:42.130 { 00:09:42.130 "name": "pt1", 00:09:42.130 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:42.130 "is_configured": true, 00:09:42.130 "data_offset": 2048, 00:09:42.130 "data_size": 63488 00:09:42.130 }, 00:09:42.130 { 00:09:42.130 "name": "pt2", 00:09:42.130 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.130 "is_configured": true, 00:09:42.130 "data_offset": 2048, 00:09:42.130 "data_size": 63488 00:09:42.130 }, 00:09:42.130 { 00:09:42.130 "name": "pt3", 00:09:42.130 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:42.130 "is_configured": true, 00:09:42.130 "data_offset": 2048, 00:09:42.130 "data_size": 63488 00:09:42.130 } 00:09:42.130 ] 00:09:42.130 } 00:09:42.130 } 00:09:42.130 }' 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:42.130 pt2 00:09:42.130 pt3' 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.130 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:42.389 10:20:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.389 [2024-11-19 10:20:56.036572] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c33ca207-46f0-42da-b7d9-e5f00f70c98a '!=' c33ca207-46f0-42da-b7d9-e5f00f70c98a ']' 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.389 [2024-11-19 10:20:56.080292] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.389 10:20:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.389 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.389 "name": "raid_bdev1", 00:09:42.389 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:42.389 "strip_size_kb": 0, 00:09:42.389 "state": "online", 00:09:42.389 "raid_level": "raid1", 00:09:42.389 "superblock": true, 00:09:42.389 "num_base_bdevs": 3, 00:09:42.389 "num_base_bdevs_discovered": 2, 00:09:42.389 "num_base_bdevs_operational": 2, 00:09:42.389 "base_bdevs_list": [ 00:09:42.389 { 00:09:42.389 "name": null, 00:09:42.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.389 "is_configured": false, 00:09:42.389 "data_offset": 0, 00:09:42.389 "data_size": 63488 00:09:42.390 }, 00:09:42.390 { 00:09:42.390 "name": "pt2", 00:09:42.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.390 "is_configured": true, 00:09:42.390 "data_offset": 2048, 00:09:42.390 "data_size": 63488 00:09:42.390 }, 00:09:42.390 { 00:09:42.390 "name": "pt3", 00:09:42.390 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.390 "is_configured": true, 00:09:42.390 "data_offset": 2048, 00:09:42.390 "data_size": 63488 00:09:42.390 } 
00:09:42.390 ] 00:09:42.390 }' 00:09:42.390 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.390 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 [2024-11-19 10:20:56.515524] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:42.957 [2024-11-19 10:20:56.515602] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:42.957 [2024-11-19 10:20:56.515700] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.957 [2024-11-19 10:20:56.515773] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.957 [2024-11-19 10:20:56.515871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.957 10:20:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 [2024-11-19 10:20:56.599349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:42.957 [2024-11-19 10:20:56.599437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.957 [2024-11-19 10:20:56.599470] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:42.957 [2024-11-19 10:20:56.599498] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.957 [2024-11-19 10:20:56.601581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.957 [2024-11-19 10:20:56.601659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:42.957 [2024-11-19 10:20:56.601749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:42.957 [2024-11-19 10:20:56.601810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:42.957 pt2 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.957 10:20:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.957 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.957 "name": "raid_bdev1", 00:09:42.957 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:42.957 "strip_size_kb": 0, 00:09:42.957 "state": "configuring", 00:09:42.957 "raid_level": "raid1", 00:09:42.957 "superblock": true, 00:09:42.957 "num_base_bdevs": 3, 00:09:42.957 "num_base_bdevs_discovered": 1, 00:09:42.957 "num_base_bdevs_operational": 2, 00:09:42.957 "base_bdevs_list": [ 00:09:42.957 { 00:09:42.957 "name": null, 00:09:42.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:42.958 "is_configured": false, 00:09:42.958 "data_offset": 2048, 00:09:42.958 "data_size": 63488 00:09:42.958 }, 00:09:42.958 { 00:09:42.958 "name": "pt2", 00:09:42.958 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:42.958 "is_configured": true, 00:09:42.958 "data_offset": 2048, 00:09:42.958 "data_size": 63488 00:09:42.958 }, 00:09:42.958 { 00:09:42.958 "name": null, 00:09:42.958 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:42.958 "is_configured": false, 00:09:42.958 "data_offset": 2048, 00:09:42.958 "data_size": 63488 00:09:42.958 } 
00:09:42.958 ] 00:09:42.958 }' 00:09:42.958 10:20:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.958 10:20:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.526 [2024-11-19 10:20:57.070558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:43.526 [2024-11-19 10:20:57.070618] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.526 [2024-11-19 10:20:57.070638] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:43.526 [2024-11-19 10:20:57.070648] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.526 [2024-11-19 10:20:57.071069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.526 [2024-11-19 10:20:57.071089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:43.526 [2024-11-19 10:20:57.071177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:43.526 [2024-11-19 10:20:57.071202] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:43.526 [2024-11-19 10:20:57.071325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:43.526 [2024-11-19 10:20:57.071343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:43.526 [2024-11-19 10:20:57.071594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:43.526 [2024-11-19 10:20:57.071743] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:43.526 [2024-11-19 10:20:57.071751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:43.526 [2024-11-19 10:20:57.071895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.526 pt3 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.526 "name": "raid_bdev1", 00:09:43.526 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:43.526 "strip_size_kb": 0, 00:09:43.526 "state": "online", 00:09:43.526 "raid_level": "raid1", 00:09:43.526 "superblock": true, 00:09:43.526 "num_base_bdevs": 3, 00:09:43.526 "num_base_bdevs_discovered": 2, 00:09:43.526 "num_base_bdevs_operational": 2, 00:09:43.526 "base_bdevs_list": [ 00:09:43.526 { 00:09:43.526 "name": null, 00:09:43.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.526 "is_configured": false, 00:09:43.526 "data_offset": 2048, 00:09:43.526 "data_size": 63488 00:09:43.526 }, 00:09:43.526 { 00:09:43.526 "name": "pt2", 00:09:43.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:43.526 "is_configured": true, 00:09:43.526 "data_offset": 2048, 00:09:43.526 "data_size": 63488 00:09:43.526 }, 00:09:43.526 { 00:09:43.526 "name": "pt3", 00:09:43.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:43.526 "is_configured": true, 00:09:43.526 "data_offset": 2048, 00:09:43.526 "data_size": 63488 00:09:43.526 } 00:09:43.526 ] 00:09:43.526 }' 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.526 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.786 [2024-11-19 10:20:57.501820] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:43.786 [2024-11-19 10:20:57.501899] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.786 [2024-11-19 10:20:57.502004] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.786 [2024-11-19 10:20:57.502098] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.786 [2024-11-19 10:20:57.502144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.786 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.047 [2024-11-19 10:20:57.573698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:44.047 [2024-11-19 10:20:57.573747] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.047 [2024-11-19 10:20:57.573784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:44.047 [2024-11-19 10:20:57.573792] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.047 [2024-11-19 10:20:57.575870] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.047 [2024-11-19 10:20:57.575946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:44.047 [2024-11-19 10:20:57.576041] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:44.047 [2024-11-19 10:20:57.576090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:44.047 [2024-11-19 10:20:57.576214] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:44.047 [2024-11-19 10:20:57.576224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.047 [2024-11-19 10:20:57.576239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:44.047 [2024-11-19 10:20:57.576286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:44.047 pt1 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.047 "name": "raid_bdev1", 00:09:44.047 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:44.047 "strip_size_kb": 0, 00:09:44.047 "state": "configuring", 00:09:44.047 "raid_level": "raid1", 00:09:44.047 "superblock": true, 00:09:44.047 "num_base_bdevs": 3, 00:09:44.047 "num_base_bdevs_discovered": 1, 00:09:44.047 "num_base_bdevs_operational": 2, 00:09:44.047 "base_bdevs_list": [ 00:09:44.047 { 00:09:44.047 "name": null, 00:09:44.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.047 "is_configured": false, 00:09:44.047 "data_offset": 2048, 00:09:44.047 "data_size": 63488 00:09:44.047 }, 00:09:44.047 { 00:09:44.047 "name": "pt2", 00:09:44.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.047 "is_configured": true, 00:09:44.047 "data_offset": 2048, 00:09:44.047 "data_size": 63488 00:09:44.047 }, 00:09:44.047 { 00:09:44.047 "name": null, 00:09:44.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.047 "is_configured": false, 00:09:44.047 "data_offset": 2048, 00:09:44.047 "data_size": 63488 00:09:44.047 } 00:09:44.047 ] 00:09:44.047 }' 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.047 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.309 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:44.309 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.309 10:20:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.309 10:20:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.309 [2024-11-19 10:20:58.040904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.309 [2024-11-19 10:20:58.041018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.309 [2024-11-19 10:20:58.041057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:44.309 [2024-11-19 10:20:58.041084] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.309 [2024-11-19 10:20:58.041558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.309 [2024-11-19 10:20:58.041615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.309 [2024-11-19 10:20:58.041717] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.309 [2024-11-19 10:20:58.041790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.309 [2024-11-19 10:20:58.041952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:44.309 [2024-11-19 10:20:58.041990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:44.309 [2024-11-19 10:20:58.042271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:44.309 [2024-11-19 10:20:58.042462] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:44.309 [2024-11-19 10:20:58.042507] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:44.309 [2024-11-19 10:20:58.042679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.309 pt3 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.309 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:44.568 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.568 "name": "raid_bdev1", 00:09:44.568 "uuid": "c33ca207-46f0-42da-b7d9-e5f00f70c98a", 00:09:44.568 "strip_size_kb": 0, 00:09:44.568 "state": "online", 00:09:44.568 "raid_level": "raid1", 00:09:44.568 "superblock": true, 00:09:44.568 "num_base_bdevs": 3, 00:09:44.568 "num_base_bdevs_discovered": 2, 00:09:44.568 "num_base_bdevs_operational": 2, 00:09:44.568 "base_bdevs_list": [ 00:09:44.568 { 00:09:44.568 "name": null, 00:09:44.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.568 "is_configured": false, 00:09:44.568 "data_offset": 2048, 00:09:44.568 "data_size": 63488 00:09:44.568 }, 00:09:44.568 { 00:09:44.568 "name": "pt2", 00:09:44.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.568 "is_configured": true, 00:09:44.568 "data_offset": 2048, 00:09:44.568 "data_size": 63488 00:09:44.568 }, 00:09:44.568 { 00:09:44.568 "name": "pt3", 00:09:44.568 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.568 "is_configured": true, 00:09:44.568 "data_offset": 2048, 00:09:44.568 "data_size": 63488 00:09:44.568 } 00:09:44.568 ] 00:09:44.568 }' 00:09:44.568 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.568 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:44.827 [2024-11-19 10:20:58.528333] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' c33ca207-46f0-42da-b7d9-e5f00f70c98a '!=' c33ca207-46f0-42da-b7d9-e5f00f70c98a ']' 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68454 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68454 ']' 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68454 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68454 00:09:44.827 killing process with pid 68454 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68454' 00:09:44.827 10:20:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68454 00:09:44.827 [2024-11-19 10:20:58.606153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.828 [2024-11-19 10:20:58.606242] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.828 [2024-11-19 10:20:58.606297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.828 [2024-11-19 10:20:58.606308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:44.828 10:20:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68454 00:09:45.396 [2024-11-19 10:20:58.899949] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.336 10:20:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:46.336 00:09:46.336 real 0m7.617s 00:09:46.336 user 0m11.978s 00:09:46.336 sys 0m1.298s 00:09:46.336 10:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.336 10:20:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.336 ************************************ 00:09:46.336 END TEST raid_superblock_test 00:09:46.336 ************************************ 00:09:46.336 10:21:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:46.336 10:21:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.336 10:21:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.336 10:21:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:46.336 ************************************ 00:09:46.336 START TEST raid_read_error_test 00:09:46.336 ************************************ 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:46.336 10:21:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:46.336 10:21:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7NXYc2odUe 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68894 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68894 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68894 ']' 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.336 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.595 [2024-11-19 10:21:00.127717] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:46.595 [2024-11-19 10:21:00.127914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:09:46.595 [2024-11-19 10:21:00.292582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.854 [2024-11-19 10:21:00.400935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.854 [2024-11-19 10:21:00.593777] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.854 [2024-11-19 10:21:00.593903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 BaseBdev1_malloc 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 true 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 [2024-11-19 10:21:00.999669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:47.422 [2024-11-19 10:21:00.999727] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.422 [2024-11-19 10:21:00.999745] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:47.422 [2024-11-19 10:21:00.999755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.422 [2024-11-19 10:21:01.001785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.422 [2024-11-19 10:21:01.001823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:47.422 BaseBdev1 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 BaseBdev2_malloc 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 true 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 [2024-11-19 10:21:01.066910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:47.422 [2024-11-19 10:21:01.066964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.422 [2024-11-19 10:21:01.066980] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:47.422 [2024-11-19 10:21:01.066990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.422 [2024-11-19 10:21:01.068985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.422 [2024-11-19 10:21:01.069071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:47.422 BaseBdev2 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 BaseBdev3_malloc 00:09:47.422 10:21:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 true 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 [2024-11-19 10:21:01.142049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:47.422 [2024-11-19 10:21:01.142098] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.422 [2024-11-19 10:21:01.142113] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:47.422 [2024-11-19 10:21:01.142122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.422 [2024-11-19 10:21:01.144165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.422 [2024-11-19 10:21:01.144253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:47.422 BaseBdev3 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.422 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.422 [2024-11-19 10:21:01.154122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.422 [2024-11-19 10:21:01.155794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.422 [2024-11-19 10:21:01.155858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.422 [2024-11-19 10:21:01.156048] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:47.422 [2024-11-19 10:21:01.156061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:47.423 [2024-11-19 10:21:01.156283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:47.423 [2024-11-19 10:21:01.156471] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:47.423 [2024-11-19 10:21:01.156483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:47.423 [2024-11-19 10:21:01.156627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:47.423 10:21:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.423 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.682 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.682 "name": "raid_bdev1", 00:09:47.682 "uuid": "7c030bcc-8665-460f-a8aa-c05a9c50de7a", 00:09:47.682 "strip_size_kb": 0, 00:09:47.682 "state": "online", 00:09:47.682 "raid_level": "raid1", 00:09:47.682 "superblock": true, 00:09:47.682 "num_base_bdevs": 3, 00:09:47.682 "num_base_bdevs_discovered": 3, 00:09:47.682 "num_base_bdevs_operational": 3, 00:09:47.682 "base_bdevs_list": [ 00:09:47.682 { 00:09:47.682 "name": "BaseBdev1", 00:09:47.682 "uuid": "df40d896-2f22-5b72-bbd5-69daf2226e08", 00:09:47.682 "is_configured": true, 00:09:47.682 "data_offset": 2048, 00:09:47.682 "data_size": 63488 00:09:47.682 }, 00:09:47.682 { 00:09:47.682 "name": "BaseBdev2", 00:09:47.682 "uuid": "422f121b-9abe-5119-9632-18c2cd1be733", 00:09:47.682 "is_configured": true, 00:09:47.682 "data_offset": 2048, 00:09:47.682 "data_size": 63488 
00:09:47.682 }, 00:09:47.682 { 00:09:47.682 "name": "BaseBdev3", 00:09:47.682 "uuid": "ff0d4c28-44c3-57a5-9b81-582d890cca09", 00:09:47.682 "is_configured": true, 00:09:47.682 "data_offset": 2048, 00:09:47.682 "data_size": 63488 00:09:47.682 } 00:09:47.682 ] 00:09:47.682 }' 00:09:47.682 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.682 10:21:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.940 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:47.940 10:21:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:47.940 [2024-11-19 10:21:01.666490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.878 
10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.878 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.879 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.879 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.138 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.138 "name": "raid_bdev1", 00:09:49.138 "uuid": "7c030bcc-8665-460f-a8aa-c05a9c50de7a", 00:09:49.138 "strip_size_kb": 0, 00:09:49.138 "state": "online", 00:09:49.138 "raid_level": "raid1", 00:09:49.138 "superblock": true, 00:09:49.138 "num_base_bdevs": 3, 00:09:49.138 "num_base_bdevs_discovered": 3, 00:09:49.138 "num_base_bdevs_operational": 3, 00:09:49.138 "base_bdevs_list": [ 00:09:49.138 { 00:09:49.138 "name": "BaseBdev1", 00:09:49.138 "uuid": "df40d896-2f22-5b72-bbd5-69daf2226e08", 
00:09:49.138 "is_configured": true, 00:09:49.138 "data_offset": 2048, 00:09:49.138 "data_size": 63488 00:09:49.138 }, 00:09:49.138 { 00:09:49.138 "name": "BaseBdev2", 00:09:49.138 "uuid": "422f121b-9abe-5119-9632-18c2cd1be733", 00:09:49.138 "is_configured": true, 00:09:49.138 "data_offset": 2048, 00:09:49.138 "data_size": 63488 00:09:49.138 }, 00:09:49.138 { 00:09:49.138 "name": "BaseBdev3", 00:09:49.138 "uuid": "ff0d4c28-44c3-57a5-9b81-582d890cca09", 00:09:49.138 "is_configured": true, 00:09:49.138 "data_offset": 2048, 00:09:49.138 "data_size": 63488 00:09:49.138 } 00:09:49.138 ] 00:09:49.138 }' 00:09:49.138 10:21:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.138 10:21:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.398 [2024-11-19 10:21:03.044978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.398 [2024-11-19 10:21:03.045023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.398 [2024-11-19 10:21:03.047799] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.398 [2024-11-19 10:21:03.047882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.398 [2024-11-19 10:21:03.048008] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.398 [2024-11-19 10:21:03.048075] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:49.398 { 00:09:49.398 "results": [ 00:09:49.398 { 00:09:49.398 "job": "raid_bdev1", 
00:09:49.398 "core_mask": "0x1", 00:09:49.398 "workload": "randrw", 00:09:49.398 "percentage": 50, 00:09:49.398 "status": "finished", 00:09:49.398 "queue_depth": 1, 00:09:49.398 "io_size": 131072, 00:09:49.398 "runtime": 1.379363, 00:09:49.398 "iops": 14102.161649979013, 00:09:49.398 "mibps": 1762.7702062473766, 00:09:49.398 "io_failed": 0, 00:09:49.398 "io_timeout": 0, 00:09:49.398 "avg_latency_us": 68.44403018245787, 00:09:49.398 "min_latency_us": 22.246288209606988, 00:09:49.398 "max_latency_us": 1345.0620087336245 00:09:49.398 } 00:09:49.398 ], 00:09:49.398 "core_count": 1 00:09:49.398 } 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68894 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68894 ']' 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68894 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68894 00:09:49.398 killing process with pid 68894 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68894' 00:09:49.398 10:21:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68894 00:09:49.398 [2024-11-19 10:21:03.094619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.398 10:21:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68894 00:09:49.657 [2024-11-19 10:21:03.316598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7NXYc2odUe 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:51.037 ************************************ 00:09:51.037 END TEST raid_read_error_test 00:09:51.037 ************************************ 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:51.037 00:09:51.037 real 0m4.403s 00:09:51.037 user 0m5.193s 00:09:51.037 sys 0m0.549s 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.037 10:21:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.037 10:21:04 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:51.037 10:21:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.037 10:21:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.037 10:21:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.037 ************************************ 00:09:51.037 START TEST raid_write_error_test 00:09:51.037 ************************************ 00:09:51.037 10:21:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lTh3RhaUQv 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69034 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69034 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69034 ']' 00:09:51.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.037 10:21:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.037 [2024-11-19 10:21:04.596740] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:09:51.037 [2024-11-19 10:21:04.596848] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69034 ] 00:09:51.037 [2024-11-19 10:21:04.768366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.296 [2024-11-19 10:21:04.877129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.296 [2024-11-19 10:21:05.071815] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.296 [2024-11-19 10:21:05.071881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 BaseBdev1_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 true 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 [2024-11-19 10:21:05.475340] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:51.865 [2024-11-19 10:21:05.475397] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.865 [2024-11-19 10:21:05.475416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:51.865 [2024-11-19 10:21:05.475426] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.865 [2024-11-19 10:21:05.477422] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.865 [2024-11-19 10:21:05.477520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:51.865 BaseBdev1 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:51.865 BaseBdev2_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 true 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 [2024-11-19 10:21:05.539635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:51.865 [2024-11-19 10:21:05.539690] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.865 [2024-11-19 10:21:05.539720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:51.865 [2024-11-19 10:21:05.539731] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.865 [2024-11-19 10:21:05.541799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.865 [2024-11-19 10:21:05.541837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:51.865 BaseBdev2 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:51.865 10:21:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 BaseBdev3_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 true 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.865 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.865 [2024-11-19 10:21:05.618864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:51.865 [2024-11-19 10:21:05.618917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.865 [2024-11-19 10:21:05.618933] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:51.866 [2024-11-19 10:21:05.618943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.866 [2024-11-19 10:21:05.621104] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.866 [2024-11-19 10:21:05.621141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:51.866 BaseBdev3 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.866 [2024-11-19 10:21:05.630908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:51.866 [2024-11-19 10:21:05.632685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.866 [2024-11-19 10:21:05.632796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.866 [2024-11-19 10:21:05.633034] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.866 [2024-11-19 10:21:05.633078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:51.866 [2024-11-19 10:21:05.633331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:51.866 [2024-11-19 10:21:05.633532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.866 [2024-11-19 10:21:05.633577] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:51.866 [2024-11-19 10:21:05.633754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.866 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.125 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.125 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.125 "name": "raid_bdev1", 00:09:52.125 "uuid": "13e5ba79-40f0-4af9-96c2-9b772da6d919", 00:09:52.125 "strip_size_kb": 0, 00:09:52.125 "state": "online", 00:09:52.125 "raid_level": "raid1", 00:09:52.125 "superblock": true, 00:09:52.125 "num_base_bdevs": 3, 00:09:52.125 "num_base_bdevs_discovered": 3, 00:09:52.125 "num_base_bdevs_operational": 3, 00:09:52.125 "base_bdevs_list": [ 00:09:52.125 { 00:09:52.125 "name": "BaseBdev1", 00:09:52.125 
"uuid": "0fd6ed6f-ef17-59be-b7a2-985f3e03c06f", 00:09:52.125 "is_configured": true, 00:09:52.125 "data_offset": 2048, 00:09:52.125 "data_size": 63488 00:09:52.125 }, 00:09:52.125 { 00:09:52.125 "name": "BaseBdev2", 00:09:52.125 "uuid": "e124eefa-8736-552d-86a3-0687e2634b8a", 00:09:52.125 "is_configured": true, 00:09:52.125 "data_offset": 2048, 00:09:52.125 "data_size": 63488 00:09:52.125 }, 00:09:52.125 { 00:09:52.125 "name": "BaseBdev3", 00:09:52.125 "uuid": "b8d01f4d-9009-587c-8bd9-0baddbe7ffb3", 00:09:52.125 "is_configured": true, 00:09:52.125 "data_offset": 2048, 00:09:52.125 "data_size": 63488 00:09:52.125 } 00:09:52.125 ] 00:09:52.125 }' 00:09:52.125 10:21:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.125 10:21:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.383 10:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.384 10:21:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:52.384 [2024-11-19 10:21:06.139341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.335 [2024-11-19 10:21:07.057879] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:53.335 [2024-11-19 10:21:07.057938] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.335 [2024-11-19 10:21:07.058149] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.335 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.594 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.594 "name": "raid_bdev1", 00:09:53.594 "uuid": "13e5ba79-40f0-4af9-96c2-9b772da6d919", 00:09:53.594 "strip_size_kb": 0, 00:09:53.594 "state": "online", 00:09:53.594 "raid_level": "raid1", 00:09:53.594 "superblock": true, 00:09:53.594 "num_base_bdevs": 3, 00:09:53.594 "num_base_bdevs_discovered": 2, 00:09:53.594 "num_base_bdevs_operational": 2, 00:09:53.594 "base_bdevs_list": [ 00:09:53.594 { 00:09:53.594 "name": null, 00:09:53.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.594 "is_configured": false, 00:09:53.594 "data_offset": 0, 00:09:53.594 "data_size": 63488 00:09:53.594 }, 00:09:53.594 { 00:09:53.594 "name": "BaseBdev2", 00:09:53.594 "uuid": "e124eefa-8736-552d-86a3-0687e2634b8a", 00:09:53.594 "is_configured": true, 00:09:53.594 "data_offset": 2048, 00:09:53.594 "data_size": 63488 00:09:53.594 }, 00:09:53.594 { 00:09:53.594 "name": "BaseBdev3", 00:09:53.594 "uuid": "b8d01f4d-9009-587c-8bd9-0baddbe7ffb3", 00:09:53.594 "is_configured": true, 00:09:53.594 "data_offset": 2048, 00:09:53.594 "data_size": 63488 00:09:53.594 } 00:09:53.594 ] 00:09:53.594 }' 00:09:53.594 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.594 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.854 [2024-11-19 10:21:07.495898] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:53.854 [2024-11-19 10:21:07.496011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:53.854 [2024-11-19 10:21:07.498665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:53.854 [2024-11-19 10:21:07.498767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.854 [2024-11-19 10:21:07.498866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:53.854 [2024-11-19 10:21:07.498912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:53.854 { 00:09:53.854 "results": [ 00:09:53.854 { 00:09:53.854 "job": "raid_bdev1", 00:09:53.854 "core_mask": "0x1", 00:09:53.854 "workload": "randrw", 00:09:53.854 "percentage": 50, 00:09:53.854 "status": "finished", 00:09:53.854 "queue_depth": 1, 00:09:53.854 "io_size": 131072, 00:09:53.854 "runtime": 1.357455, 00:09:53.854 "iops": 15629.984051036683, 00:09:53.854 "mibps": 1953.7480063795854, 00:09:53.854 "io_failed": 0, 00:09:53.854 "io_timeout": 0, 00:09:53.854 "avg_latency_us": 61.49684254592748, 00:09:53.854 "min_latency_us": 22.246288209606988, 00:09:53.854 "max_latency_us": 1595.4724890829693 00:09:53.854 } 00:09:53.854 ], 00:09:53.854 "core_count": 1 00:09:53.854 } 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69034 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69034 ']' 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69034 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:53.854 10:21:07 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69034 00:09:53.854 killing process with pid 69034 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69034' 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69034 00:09:53.854 [2024-11-19 10:21:07.540217] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:53.854 10:21:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69034 00:09:54.113 [2024-11-19 10:21:07.763348] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lTh3RhaUQv 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.492 ************************************ 00:09:55.492 END TEST raid_write_error_test 00:09:55.492 ************************************ 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:55.492 00:09:55.492 real 0m4.381s 00:09:55.492 user 0m5.184s 00:09:55.492 sys 0m0.546s 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.492 10:21:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.492 10:21:08 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:55.492 10:21:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:55.492 10:21:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:55.492 10:21:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.492 10:21:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.492 10:21:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.492 ************************************ 00:09:55.492 START TEST raid_state_function_test 00:09:55.492 ************************************ 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.492 
10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:55.492 10:21:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:55.492 Process raid pid: 69178 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69178 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69178' 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69178 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69178 ']' 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.492 10:21:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.492 [2024-11-19 10:21:09.037482] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:09:55.492 [2024-11-19 10:21:09.037691] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.492 [2024-11-19 10:21:09.208444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.752 [2024-11-19 10:21:09.319029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.752 [2024-11-19 10:21:09.513839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.752 [2024-11-19 10:21:09.513966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.321 [2024-11-19 10:21:09.855481] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.321 [2024-11-19 10:21:09.855541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.321 [2024-11-19 10:21:09.855552] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.321 [2024-11-19 10:21:09.855561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.321 [2024-11-19 10:21:09.855567] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:56.321 [2024-11-19 10:21:09.855575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.321 [2024-11-19 10:21:09.855582] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.321 [2024-11-19 10:21:09.855590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.321 "name": "Existed_Raid", 00:09:56.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.321 "strip_size_kb": 64, 00:09:56.321 "state": "configuring", 00:09:56.321 "raid_level": "raid0", 00:09:56.321 "superblock": false, 00:09:56.321 "num_base_bdevs": 4, 00:09:56.321 "num_base_bdevs_discovered": 0, 00:09:56.321 "num_base_bdevs_operational": 4, 00:09:56.321 "base_bdevs_list": [ 00:09:56.321 { 00:09:56.321 "name": "BaseBdev1", 00:09:56.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.321 "is_configured": false, 00:09:56.321 "data_offset": 0, 00:09:56.321 "data_size": 0 00:09:56.321 }, 00:09:56.321 { 00:09:56.321 "name": "BaseBdev2", 00:09:56.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.321 "is_configured": false, 00:09:56.321 "data_offset": 0, 00:09:56.321 "data_size": 0 00:09:56.321 }, 00:09:56.321 { 00:09:56.321 "name": "BaseBdev3", 00:09:56.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.321 "is_configured": false, 00:09:56.321 "data_offset": 0, 00:09:56.321 "data_size": 0 00:09:56.321 }, 00:09:56.321 { 00:09:56.321 "name": "BaseBdev4", 00:09:56.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.321 "is_configured": false, 00:09:56.321 "data_offset": 0, 00:09:56.321 "data_size": 0 00:09:56.321 } 00:09:56.321 ] 00:09:56.321 }' 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.321 10:21:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.581 [2024-11-19 10:21:10.286686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.581 [2024-11-19 10:21:10.286780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.581 [2024-11-19 10:21:10.298664] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.581 [2024-11-19 10:21:10.298746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.581 [2024-11-19 10:21:10.298773] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.581 [2024-11-19 10:21:10.298794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.581 [2024-11-19 10:21:10.298812] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.581 [2024-11-19 10:21:10.298832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.581 [2024-11-19 10:21:10.298849] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:56.581 [2024-11-19 10:21:10.298869] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.581 [2024-11-19 10:21:10.344181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.581 BaseBdev1 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.581 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.841 [ 00:09:56.841 { 00:09:56.841 "name": "BaseBdev1", 00:09:56.841 "aliases": [ 00:09:56.841 "c906af72-6149-4b11-9201-b16aea6531b6" 00:09:56.841 ], 00:09:56.841 "product_name": "Malloc disk", 00:09:56.841 "block_size": 512, 00:09:56.841 "num_blocks": 65536, 00:09:56.841 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:56.841 "assigned_rate_limits": { 00:09:56.841 "rw_ios_per_sec": 0, 00:09:56.841 "rw_mbytes_per_sec": 0, 00:09:56.841 "r_mbytes_per_sec": 0, 00:09:56.841 "w_mbytes_per_sec": 0 00:09:56.841 }, 00:09:56.841 "claimed": true, 00:09:56.841 "claim_type": "exclusive_write", 00:09:56.841 "zoned": false, 00:09:56.841 "supported_io_types": { 00:09:56.841 "read": true, 00:09:56.841 "write": true, 00:09:56.841 "unmap": true, 00:09:56.841 "flush": true, 00:09:56.841 "reset": true, 00:09:56.841 "nvme_admin": false, 00:09:56.841 "nvme_io": false, 00:09:56.841 "nvme_io_md": false, 00:09:56.841 "write_zeroes": true, 00:09:56.841 "zcopy": true, 00:09:56.841 "get_zone_info": false, 00:09:56.841 "zone_management": false, 00:09:56.841 "zone_append": false, 00:09:56.841 "compare": false, 00:09:56.841 "compare_and_write": false, 00:09:56.841 "abort": true, 00:09:56.841 "seek_hole": false, 00:09:56.841 "seek_data": false, 00:09:56.841 "copy": true, 00:09:56.841 "nvme_iov_md": false 00:09:56.841 }, 00:09:56.841 "memory_domains": [ 00:09:56.841 { 00:09:56.841 "dma_device_id": "system", 00:09:56.841 "dma_device_type": 1 00:09:56.841 }, 00:09:56.841 { 00:09:56.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.841 "dma_device_type": 2 00:09:56.841 } 00:09:56.841 ], 00:09:56.841 "driver_specific": {} 00:09:56.841 } 00:09:56.841 ] 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.841 "name": "Existed_Raid", 
00:09:56.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.841 "strip_size_kb": 64, 00:09:56.841 "state": "configuring", 00:09:56.841 "raid_level": "raid0", 00:09:56.841 "superblock": false, 00:09:56.841 "num_base_bdevs": 4, 00:09:56.841 "num_base_bdevs_discovered": 1, 00:09:56.841 "num_base_bdevs_operational": 4, 00:09:56.841 "base_bdevs_list": [ 00:09:56.841 { 00:09:56.841 "name": "BaseBdev1", 00:09:56.841 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:56.841 "is_configured": true, 00:09:56.841 "data_offset": 0, 00:09:56.841 "data_size": 65536 00:09:56.841 }, 00:09:56.841 { 00:09:56.841 "name": "BaseBdev2", 00:09:56.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.841 "is_configured": false, 00:09:56.841 "data_offset": 0, 00:09:56.841 "data_size": 0 00:09:56.841 }, 00:09:56.841 { 00:09:56.841 "name": "BaseBdev3", 00:09:56.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.841 "is_configured": false, 00:09:56.841 "data_offset": 0, 00:09:56.841 "data_size": 0 00:09:56.841 }, 00:09:56.841 { 00:09:56.841 "name": "BaseBdev4", 00:09:56.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.841 "is_configured": false, 00:09:56.841 "data_offset": 0, 00:09:56.841 "data_size": 0 00:09:56.841 } 00:09:56.841 ] 00:09:56.841 }' 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.841 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 [2024-11-19 10:21:10.795441] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.101 [2024-11-19 10:21:10.795540] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 [2024-11-19 10:21:10.807473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.101 [2024-11-19 10:21:10.809289] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.101 [2024-11-19 10:21:10.809368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.101 [2024-11-19 10:21:10.809398] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.101 [2024-11-19 10:21:10.809423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.101 [2024-11-19 10:21:10.809442] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:57.101 [2024-11-19 10:21:10.809463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.101 "name": "Existed_Raid", 00:09:57.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.101 "strip_size_kb": 64, 00:09:57.101 "state": "configuring", 00:09:57.101 "raid_level": "raid0", 00:09:57.101 "superblock": false, 00:09:57.101 "num_base_bdevs": 4, 00:09:57.101 
"num_base_bdevs_discovered": 1, 00:09:57.101 "num_base_bdevs_operational": 4, 00:09:57.101 "base_bdevs_list": [ 00:09:57.101 { 00:09:57.101 "name": "BaseBdev1", 00:09:57.101 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:57.101 "is_configured": true, 00:09:57.101 "data_offset": 0, 00:09:57.101 "data_size": 65536 00:09:57.101 }, 00:09:57.101 { 00:09:57.101 "name": "BaseBdev2", 00:09:57.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.101 "is_configured": false, 00:09:57.101 "data_offset": 0, 00:09:57.101 "data_size": 0 00:09:57.101 }, 00:09:57.101 { 00:09:57.101 "name": "BaseBdev3", 00:09:57.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.101 "is_configured": false, 00:09:57.101 "data_offset": 0, 00:09:57.101 "data_size": 0 00:09:57.101 }, 00:09:57.101 { 00:09:57.101 "name": "BaseBdev4", 00:09:57.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.101 "is_configured": false, 00:09:57.101 "data_offset": 0, 00:09:57.101 "data_size": 0 00:09:57.101 } 00:09:57.101 ] 00:09:57.101 }' 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.101 10:21:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.766 [2024-11-19 10:21:11.287423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:57.766 BaseBdev2 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:57.766 10:21:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.766 [ 00:09:57.766 { 00:09:57.766 "name": "BaseBdev2", 00:09:57.766 "aliases": [ 00:09:57.766 "00b4ba25-ee46-4945-8055-beae5355ef6f" 00:09:57.766 ], 00:09:57.766 "product_name": "Malloc disk", 00:09:57.766 "block_size": 512, 00:09:57.766 "num_blocks": 65536, 00:09:57.766 "uuid": "00b4ba25-ee46-4945-8055-beae5355ef6f", 00:09:57.766 "assigned_rate_limits": { 00:09:57.766 "rw_ios_per_sec": 0, 00:09:57.766 "rw_mbytes_per_sec": 0, 00:09:57.766 "r_mbytes_per_sec": 0, 00:09:57.766 "w_mbytes_per_sec": 0 00:09:57.766 }, 00:09:57.766 "claimed": true, 00:09:57.766 "claim_type": "exclusive_write", 00:09:57.766 "zoned": false, 00:09:57.766 "supported_io_types": { 
00:09:57.766 "read": true, 00:09:57.766 "write": true, 00:09:57.766 "unmap": true, 00:09:57.766 "flush": true, 00:09:57.766 "reset": true, 00:09:57.766 "nvme_admin": false, 00:09:57.766 "nvme_io": false, 00:09:57.766 "nvme_io_md": false, 00:09:57.766 "write_zeroes": true, 00:09:57.766 "zcopy": true, 00:09:57.766 "get_zone_info": false, 00:09:57.766 "zone_management": false, 00:09:57.766 "zone_append": false, 00:09:57.766 "compare": false, 00:09:57.766 "compare_and_write": false, 00:09:57.766 "abort": true, 00:09:57.766 "seek_hole": false, 00:09:57.766 "seek_data": false, 00:09:57.766 "copy": true, 00:09:57.766 "nvme_iov_md": false 00:09:57.766 }, 00:09:57.766 "memory_domains": [ 00:09:57.766 { 00:09:57.766 "dma_device_id": "system", 00:09:57.766 "dma_device_type": 1 00:09:57.766 }, 00:09:57.766 { 00:09:57.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.766 "dma_device_type": 2 00:09:57.766 } 00:09:57.766 ], 00:09:57.766 "driver_specific": {} 00:09:57.766 } 00:09:57.766 ] 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.766 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.767 "name": "Existed_Raid", 00:09:57.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.767 "strip_size_kb": 64, 00:09:57.767 "state": "configuring", 00:09:57.767 "raid_level": "raid0", 00:09:57.767 "superblock": false, 00:09:57.767 "num_base_bdevs": 4, 00:09:57.767 "num_base_bdevs_discovered": 2, 00:09:57.767 "num_base_bdevs_operational": 4, 00:09:57.767 "base_bdevs_list": [ 00:09:57.767 { 00:09:57.767 "name": "BaseBdev1", 00:09:57.767 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:57.767 "is_configured": true, 00:09:57.767 "data_offset": 0, 00:09:57.767 "data_size": 65536 00:09:57.767 }, 00:09:57.767 { 00:09:57.767 "name": "BaseBdev2", 00:09:57.767 "uuid": "00b4ba25-ee46-4945-8055-beae5355ef6f", 00:09:57.767 
"is_configured": true, 00:09:57.767 "data_offset": 0, 00:09:57.767 "data_size": 65536 00:09:57.767 }, 00:09:57.767 { 00:09:57.767 "name": "BaseBdev3", 00:09:57.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.767 "is_configured": false, 00:09:57.767 "data_offset": 0, 00:09:57.767 "data_size": 0 00:09:57.767 }, 00:09:57.767 { 00:09:57.767 "name": "BaseBdev4", 00:09:57.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.767 "is_configured": false, 00:09:57.767 "data_offset": 0, 00:09:57.767 "data_size": 0 00:09:57.767 } 00:09:57.767 ] 00:09:57.767 }' 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.767 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.026 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.026 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.026 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.286 [2024-11-19 10:21:11.822224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.286 BaseBdev3 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.286 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.286 [ 00:09:58.286 { 00:09:58.286 "name": "BaseBdev3", 00:09:58.286 "aliases": [ 00:09:58.286 "f86693a7-230c-465d-8d68-b7e1fdaee6ce" 00:09:58.286 ], 00:09:58.286 "product_name": "Malloc disk", 00:09:58.286 "block_size": 512, 00:09:58.286 "num_blocks": 65536, 00:09:58.286 "uuid": "f86693a7-230c-465d-8d68-b7e1fdaee6ce", 00:09:58.286 "assigned_rate_limits": { 00:09:58.286 "rw_ios_per_sec": 0, 00:09:58.286 "rw_mbytes_per_sec": 0, 00:09:58.286 "r_mbytes_per_sec": 0, 00:09:58.286 "w_mbytes_per_sec": 0 00:09:58.286 }, 00:09:58.286 "claimed": true, 00:09:58.286 "claim_type": "exclusive_write", 00:09:58.286 "zoned": false, 00:09:58.286 "supported_io_types": { 00:09:58.286 "read": true, 00:09:58.286 "write": true, 00:09:58.286 "unmap": true, 00:09:58.286 "flush": true, 00:09:58.286 "reset": true, 00:09:58.286 "nvme_admin": false, 00:09:58.286 "nvme_io": false, 00:09:58.286 "nvme_io_md": false, 00:09:58.287 "write_zeroes": true, 00:09:58.287 "zcopy": true, 00:09:58.287 "get_zone_info": false, 00:09:58.287 "zone_management": false, 00:09:58.287 "zone_append": false, 00:09:58.287 "compare": false, 00:09:58.287 "compare_and_write": false, 
00:09:58.287 "abort": true, 00:09:58.287 "seek_hole": false, 00:09:58.287 "seek_data": false, 00:09:58.287 "copy": true, 00:09:58.287 "nvme_iov_md": false 00:09:58.287 }, 00:09:58.287 "memory_domains": [ 00:09:58.287 { 00:09:58.287 "dma_device_id": "system", 00:09:58.287 "dma_device_type": 1 00:09:58.287 }, 00:09:58.287 { 00:09:58.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.287 "dma_device_type": 2 00:09:58.287 } 00:09:58.287 ], 00:09:58.287 "driver_specific": {} 00:09:58.287 } 00:09:58.287 ] 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.287 "name": "Existed_Raid", 00:09:58.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.287 "strip_size_kb": 64, 00:09:58.287 "state": "configuring", 00:09:58.287 "raid_level": "raid0", 00:09:58.287 "superblock": false, 00:09:58.287 "num_base_bdevs": 4, 00:09:58.287 "num_base_bdevs_discovered": 3, 00:09:58.287 "num_base_bdevs_operational": 4, 00:09:58.287 "base_bdevs_list": [ 00:09:58.287 { 00:09:58.287 "name": "BaseBdev1", 00:09:58.287 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:58.287 "is_configured": true, 00:09:58.287 "data_offset": 0, 00:09:58.287 "data_size": 65536 00:09:58.287 }, 00:09:58.287 { 00:09:58.287 "name": "BaseBdev2", 00:09:58.287 "uuid": "00b4ba25-ee46-4945-8055-beae5355ef6f", 00:09:58.287 "is_configured": true, 00:09:58.287 "data_offset": 0, 00:09:58.287 "data_size": 65536 00:09:58.287 }, 00:09:58.287 { 00:09:58.287 "name": "BaseBdev3", 00:09:58.287 "uuid": "f86693a7-230c-465d-8d68-b7e1fdaee6ce", 00:09:58.287 "is_configured": true, 00:09:58.287 "data_offset": 0, 00:09:58.287 "data_size": 65536 00:09:58.287 }, 00:09:58.287 { 00:09:58.287 "name": "BaseBdev4", 00:09:58.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.287 "is_configured": false, 
00:09:58.287 "data_offset": 0, 00:09:58.287 "data_size": 0 00:09:58.287 } 00:09:58.287 ] 00:09:58.287 }' 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.287 10:21:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.547 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:58.547 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.547 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.547 [2024-11-19 10:21:12.323401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:58.547 [2024-11-19 10:21:12.323442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:58.547 [2024-11-19 10:21:12.323451] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:58.547 [2024-11-19 10:21:12.323709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:58.547 [2024-11-19 10:21:12.323858] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:58.547 [2024-11-19 10:21:12.323872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:58.547 [2024-11-19 10:21:12.324187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:58.808 BaseBdev4 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.808 [ 00:09:58.808 { 00:09:58.808 "name": "BaseBdev4", 00:09:58.808 "aliases": [ 00:09:58.808 "be5c966d-81dd-4725-b83b-ddcdaaaf387f" 00:09:58.808 ], 00:09:58.808 "product_name": "Malloc disk", 00:09:58.808 "block_size": 512, 00:09:58.808 "num_blocks": 65536, 00:09:58.808 "uuid": "be5c966d-81dd-4725-b83b-ddcdaaaf387f", 00:09:58.808 "assigned_rate_limits": { 00:09:58.808 "rw_ios_per_sec": 0, 00:09:58.808 "rw_mbytes_per_sec": 0, 00:09:58.808 "r_mbytes_per_sec": 0, 00:09:58.808 "w_mbytes_per_sec": 0 00:09:58.808 }, 00:09:58.808 "claimed": true, 00:09:58.808 "claim_type": "exclusive_write", 00:09:58.808 "zoned": false, 00:09:58.808 "supported_io_types": { 00:09:58.808 "read": true, 00:09:58.808 "write": true, 00:09:58.808 "unmap": true, 00:09:58.808 "flush": true, 00:09:58.808 "reset": true, 00:09:58.808 
"nvme_admin": false, 00:09:58.808 "nvme_io": false, 00:09:58.808 "nvme_io_md": false, 00:09:58.808 "write_zeroes": true, 00:09:58.808 "zcopy": true, 00:09:58.808 "get_zone_info": false, 00:09:58.808 "zone_management": false, 00:09:58.808 "zone_append": false, 00:09:58.808 "compare": false, 00:09:58.808 "compare_and_write": false, 00:09:58.808 "abort": true, 00:09:58.808 "seek_hole": false, 00:09:58.808 "seek_data": false, 00:09:58.808 "copy": true, 00:09:58.808 "nvme_iov_md": false 00:09:58.808 }, 00:09:58.808 "memory_domains": [ 00:09:58.808 { 00:09:58.808 "dma_device_id": "system", 00:09:58.808 "dma_device_type": 1 00:09:58.808 }, 00:09:58.808 { 00:09:58.808 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.808 "dma_device_type": 2 00:09:58.808 } 00:09:58.808 ], 00:09:58.808 "driver_specific": {} 00:09:58.808 } 00:09:58.808 ] 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.808 10:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.808 "name": "Existed_Raid", 00:09:58.808 "uuid": "9c470581-4355-4d14-b0a2-3ffbadef6b21", 00:09:58.808 "strip_size_kb": 64, 00:09:58.808 "state": "online", 00:09:58.808 "raid_level": "raid0", 00:09:58.808 "superblock": false, 00:09:58.808 "num_base_bdevs": 4, 00:09:58.808 "num_base_bdevs_discovered": 4, 00:09:58.808 "num_base_bdevs_operational": 4, 00:09:58.808 "base_bdevs_list": [ 00:09:58.808 { 00:09:58.808 "name": "BaseBdev1", 00:09:58.808 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:58.808 "is_configured": true, 00:09:58.808 "data_offset": 0, 00:09:58.808 "data_size": 65536 00:09:58.808 }, 00:09:58.808 { 00:09:58.808 "name": "BaseBdev2", 00:09:58.808 "uuid": "00b4ba25-ee46-4945-8055-beae5355ef6f", 00:09:58.808 "is_configured": true, 00:09:58.808 "data_offset": 0, 00:09:58.808 "data_size": 65536 00:09:58.808 }, 00:09:58.808 { 00:09:58.808 "name": "BaseBdev3", 00:09:58.808 "uuid": 
"f86693a7-230c-465d-8d68-b7e1fdaee6ce", 00:09:58.808 "is_configured": true, 00:09:58.808 "data_offset": 0, 00:09:58.808 "data_size": 65536 00:09:58.808 }, 00:09:58.808 { 00:09:58.808 "name": "BaseBdev4", 00:09:58.808 "uuid": "be5c966d-81dd-4725-b83b-ddcdaaaf387f", 00:09:58.808 "is_configured": true, 00:09:58.808 "data_offset": 0, 00:09:58.808 "data_size": 65536 00:09:58.808 } 00:09:58.808 ] 00:09:58.808 }' 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.808 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:59.067 [2024-11-19 10:21:12.802955] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:59.067 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.067 10:21:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:59.067 "name": "Existed_Raid", 00:09:59.067 "aliases": [ 00:09:59.067 "9c470581-4355-4d14-b0a2-3ffbadef6b21" 00:09:59.067 ], 00:09:59.067 "product_name": "Raid Volume", 00:09:59.067 "block_size": 512, 00:09:59.067 "num_blocks": 262144, 00:09:59.067 "uuid": "9c470581-4355-4d14-b0a2-3ffbadef6b21", 00:09:59.067 "assigned_rate_limits": { 00:09:59.067 "rw_ios_per_sec": 0, 00:09:59.067 "rw_mbytes_per_sec": 0, 00:09:59.067 "r_mbytes_per_sec": 0, 00:09:59.067 "w_mbytes_per_sec": 0 00:09:59.067 }, 00:09:59.067 "claimed": false, 00:09:59.067 "zoned": false, 00:09:59.067 "supported_io_types": { 00:09:59.067 "read": true, 00:09:59.067 "write": true, 00:09:59.067 "unmap": true, 00:09:59.067 "flush": true, 00:09:59.067 "reset": true, 00:09:59.067 "nvme_admin": false, 00:09:59.067 "nvme_io": false, 00:09:59.067 "nvme_io_md": false, 00:09:59.067 "write_zeroes": true, 00:09:59.067 "zcopy": false, 00:09:59.067 "get_zone_info": false, 00:09:59.067 "zone_management": false, 00:09:59.067 "zone_append": false, 00:09:59.067 "compare": false, 00:09:59.067 "compare_and_write": false, 00:09:59.067 "abort": false, 00:09:59.067 "seek_hole": false, 00:09:59.067 "seek_data": false, 00:09:59.067 "copy": false, 00:09:59.067 "nvme_iov_md": false 00:09:59.067 }, 00:09:59.067 "memory_domains": [ 00:09:59.067 { 00:09:59.067 "dma_device_id": "system", 00:09:59.067 "dma_device_type": 1 00:09:59.067 }, 00:09:59.067 { 00:09:59.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.067 "dma_device_type": 2 00:09:59.067 }, 00:09:59.067 { 00:09:59.067 "dma_device_id": "system", 00:09:59.068 "dma_device_type": 1 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.068 "dma_device_type": 2 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "dma_device_id": "system", 00:09:59.068 "dma_device_type": 1 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:59.068 "dma_device_type": 2 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "dma_device_id": "system", 00:09:59.068 "dma_device_type": 1 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.068 "dma_device_type": 2 00:09:59.068 } 00:09:59.068 ], 00:09:59.068 "driver_specific": { 00:09:59.068 "raid": { 00:09:59.068 "uuid": "9c470581-4355-4d14-b0a2-3ffbadef6b21", 00:09:59.068 "strip_size_kb": 64, 00:09:59.068 "state": "online", 00:09:59.068 "raid_level": "raid0", 00:09:59.068 "superblock": false, 00:09:59.068 "num_base_bdevs": 4, 00:09:59.068 "num_base_bdevs_discovered": 4, 00:09:59.068 "num_base_bdevs_operational": 4, 00:09:59.068 "base_bdevs_list": [ 00:09:59.068 { 00:09:59.068 "name": "BaseBdev1", 00:09:59.068 "uuid": "c906af72-6149-4b11-9201-b16aea6531b6", 00:09:59.068 "is_configured": true, 00:09:59.068 "data_offset": 0, 00:09:59.068 "data_size": 65536 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "name": "BaseBdev2", 00:09:59.068 "uuid": "00b4ba25-ee46-4945-8055-beae5355ef6f", 00:09:59.068 "is_configured": true, 00:09:59.068 "data_offset": 0, 00:09:59.068 "data_size": 65536 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "name": "BaseBdev3", 00:09:59.068 "uuid": "f86693a7-230c-465d-8d68-b7e1fdaee6ce", 00:09:59.068 "is_configured": true, 00:09:59.068 "data_offset": 0, 00:09:59.068 "data_size": 65536 00:09:59.068 }, 00:09:59.068 { 00:09:59.068 "name": "BaseBdev4", 00:09:59.068 "uuid": "be5c966d-81dd-4725-b83b-ddcdaaaf387f", 00:09:59.068 "is_configured": true, 00:09:59.068 "data_offset": 0, 00:09:59.068 "data_size": 65536 00:09:59.068 } 00:09:59.068 ] 00:09:59.068 } 00:09:59.068 } 00:09:59.068 }' 00:09:59.068 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:59.326 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:59.326 BaseBdev2 00:09:59.327 BaseBdev3 
00:09:59.327 BaseBdev4' 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.327 10:21:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.327 10:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.327 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:59.586 10:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.586 [2024-11-19 10:21:13.146106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:59.586 [2024-11-19 10:21:13.146136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.586 [2024-11-19 10:21:13.146184] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.586 "name": "Existed_Raid", 00:09:59.586 "uuid": "9c470581-4355-4d14-b0a2-3ffbadef6b21", 00:09:59.586 "strip_size_kb": 64, 00:09:59.586 "state": "offline", 00:09:59.586 "raid_level": "raid0", 00:09:59.586 "superblock": false, 00:09:59.586 "num_base_bdevs": 4, 00:09:59.586 "num_base_bdevs_discovered": 3, 00:09:59.586 "num_base_bdevs_operational": 3, 00:09:59.586 "base_bdevs_list": [ 00:09:59.586 { 00:09:59.586 "name": null, 00:09:59.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.586 "is_configured": false, 00:09:59.586 "data_offset": 0, 00:09:59.586 "data_size": 65536 00:09:59.586 }, 00:09:59.586 { 00:09:59.586 "name": "BaseBdev2", 00:09:59.586 "uuid": "00b4ba25-ee46-4945-8055-beae5355ef6f", 00:09:59.586 "is_configured": 
true, 00:09:59.586 "data_offset": 0, 00:09:59.586 "data_size": 65536 00:09:59.586 }, 00:09:59.586 { 00:09:59.586 "name": "BaseBdev3", 00:09:59.586 "uuid": "f86693a7-230c-465d-8d68-b7e1fdaee6ce", 00:09:59.586 "is_configured": true, 00:09:59.586 "data_offset": 0, 00:09:59.586 "data_size": 65536 00:09:59.586 }, 00:09:59.586 { 00:09:59.586 "name": "BaseBdev4", 00:09:59.586 "uuid": "be5c966d-81dd-4725-b83b-ddcdaaaf387f", 00:09:59.586 "is_configured": true, 00:09:59.586 "data_offset": 0, 00:09:59.586 "data_size": 65536 00:09:59.586 } 00:09:59.586 ] 00:09:59.586 }' 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.586 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.156 [2024-11-19 10:21:13.712044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.156 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.156 [2024-11-19 10:21:13.862685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.415 10:21:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 10:21:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 [2024-11-19 10:21:14.015608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:00.415 [2024-11-19 10:21:14.015703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.415 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 BaseBdev2 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 [ 00:10:00.675 { 00:10:00.675 "name": "BaseBdev2", 00:10:00.675 "aliases": [ 00:10:00.675 "9339498b-693e-4ac6-825a-719160fdf04c" 00:10:00.675 ], 00:10:00.675 "product_name": "Malloc disk", 00:10:00.675 "block_size": 512, 00:10:00.675 "num_blocks": 65536, 00:10:00.675 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:00.675 "assigned_rate_limits": { 00:10:00.675 "rw_ios_per_sec": 0, 00:10:00.675 "rw_mbytes_per_sec": 0, 00:10:00.675 "r_mbytes_per_sec": 0, 00:10:00.675 "w_mbytes_per_sec": 0 00:10:00.675 }, 00:10:00.675 "claimed": false, 00:10:00.675 "zoned": false, 00:10:00.675 "supported_io_types": { 00:10:00.675 "read": true, 00:10:00.675 "write": true, 00:10:00.675 "unmap": true, 00:10:00.675 "flush": true, 00:10:00.675 "reset": true, 00:10:00.675 "nvme_admin": false, 00:10:00.675 "nvme_io": false, 00:10:00.675 "nvme_io_md": false, 00:10:00.675 "write_zeroes": true, 00:10:00.675 "zcopy": true, 00:10:00.675 "get_zone_info": false, 00:10:00.675 "zone_management": false, 00:10:00.675 "zone_append": false, 00:10:00.675 "compare": false, 00:10:00.675 "compare_and_write": false, 00:10:00.675 "abort": true, 00:10:00.675 "seek_hole": false, 00:10:00.675 
"seek_data": false, 00:10:00.675 "copy": true, 00:10:00.675 "nvme_iov_md": false 00:10:00.675 }, 00:10:00.675 "memory_domains": [ 00:10:00.675 { 00:10:00.675 "dma_device_id": "system", 00:10:00.675 "dma_device_type": 1 00:10:00.675 }, 00:10:00.675 { 00:10:00.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.675 "dma_device_type": 2 00:10:00.675 } 00:10:00.675 ], 00:10:00.675 "driver_specific": {} 00:10:00.675 } 00:10:00.675 ] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 BaseBdev3 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.675 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.675 [ 00:10:00.675 { 00:10:00.675 "name": "BaseBdev3", 00:10:00.675 "aliases": [ 00:10:00.675 "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba" 00:10:00.675 ], 00:10:00.675 "product_name": "Malloc disk", 00:10:00.675 "block_size": 512, 00:10:00.675 "num_blocks": 65536, 00:10:00.675 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:00.675 "assigned_rate_limits": { 00:10:00.675 "rw_ios_per_sec": 0, 00:10:00.675 "rw_mbytes_per_sec": 0, 00:10:00.675 "r_mbytes_per_sec": 0, 00:10:00.675 "w_mbytes_per_sec": 0 00:10:00.675 }, 00:10:00.676 "claimed": false, 00:10:00.676 "zoned": false, 00:10:00.676 "supported_io_types": { 00:10:00.676 "read": true, 00:10:00.676 "write": true, 00:10:00.676 "unmap": true, 00:10:00.676 "flush": true, 00:10:00.676 "reset": true, 00:10:00.676 "nvme_admin": false, 00:10:00.676 "nvme_io": false, 00:10:00.676 "nvme_io_md": false, 00:10:00.676 "write_zeroes": true, 00:10:00.676 "zcopy": true, 00:10:00.676 "get_zone_info": false, 00:10:00.676 "zone_management": false, 00:10:00.676 "zone_append": false, 00:10:00.676 "compare": false, 00:10:00.676 "compare_and_write": false, 00:10:00.676 "abort": true, 00:10:00.676 "seek_hole": false, 00:10:00.676 "seek_data": false, 
00:10:00.676 "copy": true, 00:10:00.676 "nvme_iov_md": false 00:10:00.676 }, 00:10:00.676 "memory_domains": [ 00:10:00.676 { 00:10:00.676 "dma_device_id": "system", 00:10:00.676 "dma_device_type": 1 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.676 "dma_device_type": 2 00:10:00.676 } 00:10:00.676 ], 00:10:00.676 "driver_specific": {} 00:10:00.676 } 00:10:00.676 ] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.676 BaseBdev4 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.676 
10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.676 [ 00:10:00.676 { 00:10:00.676 "name": "BaseBdev4", 00:10:00.676 "aliases": [ 00:10:00.676 "cc4b1f00-9baa-470e-bf82-9823d69f9374" 00:10:00.676 ], 00:10:00.676 "product_name": "Malloc disk", 00:10:00.676 "block_size": 512, 00:10:00.676 "num_blocks": 65536, 00:10:00.676 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:00.676 "assigned_rate_limits": { 00:10:00.676 "rw_ios_per_sec": 0, 00:10:00.676 "rw_mbytes_per_sec": 0, 00:10:00.676 "r_mbytes_per_sec": 0, 00:10:00.676 "w_mbytes_per_sec": 0 00:10:00.676 }, 00:10:00.676 "claimed": false, 00:10:00.676 "zoned": false, 00:10:00.676 "supported_io_types": { 00:10:00.676 "read": true, 00:10:00.676 "write": true, 00:10:00.676 "unmap": true, 00:10:00.676 "flush": true, 00:10:00.676 "reset": true, 00:10:00.676 "nvme_admin": false, 00:10:00.676 "nvme_io": false, 00:10:00.676 "nvme_io_md": false, 00:10:00.676 "write_zeroes": true, 00:10:00.676 "zcopy": true, 00:10:00.676 "get_zone_info": false, 00:10:00.676 "zone_management": false, 00:10:00.676 "zone_append": false, 00:10:00.676 "compare": false, 00:10:00.676 "compare_and_write": false, 00:10:00.676 "abort": true, 00:10:00.676 "seek_hole": false, 00:10:00.676 "seek_data": false, 00:10:00.676 
"copy": true, 00:10:00.676 "nvme_iov_md": false 00:10:00.676 }, 00:10:00.676 "memory_domains": [ 00:10:00.676 { 00:10:00.676 "dma_device_id": "system", 00:10:00.676 "dma_device_type": 1 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.676 "dma_device_type": 2 00:10:00.676 } 00:10:00.676 ], 00:10:00.676 "driver_specific": {} 00:10:00.676 } 00:10:00.676 ] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.676 [2024-11-19 10:21:14.399984] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.676 [2024-11-19 10:21:14.400094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.676 [2024-11-19 10:21:14.400134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.676 [2024-11-19 10:21:14.401860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.676 [2024-11-19 10:21:14.401979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.676 10:21:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.676 "name": "Existed_Raid", 00:10:00.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.676 "strip_size_kb": 64, 00:10:00.676 "state": "configuring", 00:10:00.676 
"raid_level": "raid0", 00:10:00.676 "superblock": false, 00:10:00.676 "num_base_bdevs": 4, 00:10:00.676 "num_base_bdevs_discovered": 3, 00:10:00.676 "num_base_bdevs_operational": 4, 00:10:00.676 "base_bdevs_list": [ 00:10:00.676 { 00:10:00.676 "name": "BaseBdev1", 00:10:00.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.676 "is_configured": false, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 0 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "name": "BaseBdev2", 00:10:00.676 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "name": "BaseBdev3", 00:10:00.676 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 }, 00:10:00.676 { 00:10:00.676 "name": "BaseBdev4", 00:10:00.676 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:00.676 "is_configured": true, 00:10:00.676 "data_offset": 0, 00:10:00.676 "data_size": 65536 00:10:00.676 } 00:10:00.676 ] 00:10:00.676 }' 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.676 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.244 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:01.244 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.244 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.244 [2024-11-19 10:21:14.795312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.244 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.245 "name": "Existed_Raid", 00:10:01.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.245 "strip_size_kb": 64, 00:10:01.245 "state": "configuring", 00:10:01.245 "raid_level": "raid0", 00:10:01.245 "superblock": false, 00:10:01.245 
"num_base_bdevs": 4, 00:10:01.245 "num_base_bdevs_discovered": 2, 00:10:01.245 "num_base_bdevs_operational": 4, 00:10:01.245 "base_bdevs_list": [ 00:10:01.245 { 00:10:01.245 "name": "BaseBdev1", 00:10:01.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.245 "is_configured": false, 00:10:01.245 "data_offset": 0, 00:10:01.245 "data_size": 0 00:10:01.245 }, 00:10:01.245 { 00:10:01.245 "name": null, 00:10:01.245 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:01.245 "is_configured": false, 00:10:01.245 "data_offset": 0, 00:10:01.245 "data_size": 65536 00:10:01.245 }, 00:10:01.245 { 00:10:01.245 "name": "BaseBdev3", 00:10:01.245 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:01.245 "is_configured": true, 00:10:01.245 "data_offset": 0, 00:10:01.245 "data_size": 65536 00:10:01.245 }, 00:10:01.245 { 00:10:01.245 "name": "BaseBdev4", 00:10:01.245 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:01.245 "is_configured": true, 00:10:01.245 "data_offset": 0, 00:10:01.245 "data_size": 65536 00:10:01.245 } 00:10:01.245 ] 00:10:01.245 }' 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.245 10:21:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.504 10:21:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.504 [2024-11-19 10:21:15.238279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.504 BaseBdev1 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.504 [ 00:10:01.504 { 00:10:01.504 "name": "BaseBdev1", 00:10:01.504 "aliases": [ 00:10:01.504 "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27" 00:10:01.504 ], 00:10:01.504 "product_name": "Malloc disk", 00:10:01.504 "block_size": 512, 00:10:01.504 "num_blocks": 65536, 00:10:01.504 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:01.504 "assigned_rate_limits": { 00:10:01.504 "rw_ios_per_sec": 0, 00:10:01.504 "rw_mbytes_per_sec": 0, 00:10:01.504 "r_mbytes_per_sec": 0, 00:10:01.504 "w_mbytes_per_sec": 0 00:10:01.504 }, 00:10:01.504 "claimed": true, 00:10:01.504 "claim_type": "exclusive_write", 00:10:01.504 "zoned": false, 00:10:01.504 "supported_io_types": { 00:10:01.504 "read": true, 00:10:01.504 "write": true, 00:10:01.504 "unmap": true, 00:10:01.504 "flush": true, 00:10:01.504 "reset": true, 00:10:01.504 "nvme_admin": false, 00:10:01.504 "nvme_io": false, 00:10:01.504 "nvme_io_md": false, 00:10:01.504 "write_zeroes": true, 00:10:01.504 "zcopy": true, 00:10:01.504 "get_zone_info": false, 00:10:01.504 "zone_management": false, 00:10:01.504 "zone_append": false, 00:10:01.504 "compare": false, 00:10:01.504 "compare_and_write": false, 00:10:01.504 "abort": true, 00:10:01.504 "seek_hole": false, 00:10:01.504 "seek_data": false, 00:10:01.504 "copy": true, 00:10:01.504 "nvme_iov_md": false 00:10:01.504 }, 00:10:01.504 "memory_domains": [ 00:10:01.504 { 00:10:01.504 "dma_device_id": "system", 00:10:01.504 "dma_device_type": 1 00:10:01.504 }, 00:10:01.504 { 00:10:01.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.504 "dma_device_type": 2 00:10:01.504 } 00:10:01.504 ], 00:10:01.504 "driver_specific": {} 00:10:01.504 } 00:10:01.504 ] 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.504 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.505 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.505 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.505 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.763 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.763 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.763 "name": "Existed_Raid", 00:10:01.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.763 "strip_size_kb": 64, 00:10:01.763 "state": "configuring", 00:10:01.763 "raid_level": "raid0", 00:10:01.763 "superblock": false, 
00:10:01.763 "num_base_bdevs": 4, 00:10:01.763 "num_base_bdevs_discovered": 3, 00:10:01.763 "num_base_bdevs_operational": 4, 00:10:01.763 "base_bdevs_list": [ 00:10:01.764 { 00:10:01.764 "name": "BaseBdev1", 00:10:01.764 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:01.764 "is_configured": true, 00:10:01.764 "data_offset": 0, 00:10:01.764 "data_size": 65536 00:10:01.764 }, 00:10:01.764 { 00:10:01.764 "name": null, 00:10:01.764 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:01.764 "is_configured": false, 00:10:01.764 "data_offset": 0, 00:10:01.764 "data_size": 65536 00:10:01.764 }, 00:10:01.764 { 00:10:01.764 "name": "BaseBdev3", 00:10:01.764 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:01.764 "is_configured": true, 00:10:01.764 "data_offset": 0, 00:10:01.764 "data_size": 65536 00:10:01.764 }, 00:10:01.764 { 00:10:01.764 "name": "BaseBdev4", 00:10:01.764 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:01.764 "is_configured": true, 00:10:01.764 "data_offset": 0, 00:10:01.764 "data_size": 65536 00:10:01.764 } 00:10:01.764 ] 00:10:01.764 }' 00:10:01.764 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.764 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:02.022 10:21:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.022 [2024-11-19 10:21:15.745475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.022 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.023 10:21:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.023 "name": "Existed_Raid", 00:10:02.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.023 "strip_size_kb": 64, 00:10:02.023 "state": "configuring", 00:10:02.023 "raid_level": "raid0", 00:10:02.023 "superblock": false, 00:10:02.023 "num_base_bdevs": 4, 00:10:02.023 "num_base_bdevs_discovered": 2, 00:10:02.023 "num_base_bdevs_operational": 4, 00:10:02.023 "base_bdevs_list": [ 00:10:02.023 { 00:10:02.023 "name": "BaseBdev1", 00:10:02.023 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:02.023 "is_configured": true, 00:10:02.023 "data_offset": 0, 00:10:02.023 "data_size": 65536 00:10:02.023 }, 00:10:02.023 { 00:10:02.023 "name": null, 00:10:02.023 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:02.023 "is_configured": false, 00:10:02.023 "data_offset": 0, 00:10:02.023 "data_size": 65536 00:10:02.023 }, 00:10:02.023 { 00:10:02.023 "name": null, 00:10:02.023 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:02.023 "is_configured": false, 00:10:02.023 "data_offset": 0, 00:10:02.023 "data_size": 65536 00:10:02.023 }, 00:10:02.023 { 00:10:02.023 "name": "BaseBdev4", 00:10:02.023 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:02.023 "is_configured": true, 00:10:02.023 "data_offset": 0, 00:10:02.023 "data_size": 65536 00:10:02.023 } 00:10:02.023 ] 00:10:02.023 }' 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.023 10:21:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 [2024-11-19 10:21:16.236657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.591 "name": "Existed_Raid", 00:10:02.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.591 "strip_size_kb": 64, 00:10:02.591 "state": "configuring", 00:10:02.591 "raid_level": "raid0", 00:10:02.591 "superblock": false, 00:10:02.591 "num_base_bdevs": 4, 00:10:02.591 "num_base_bdevs_discovered": 3, 00:10:02.591 "num_base_bdevs_operational": 4, 00:10:02.591 "base_bdevs_list": [ 00:10:02.591 { 00:10:02.591 "name": "BaseBdev1", 00:10:02.591 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:02.591 "is_configured": true, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 65536 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "name": null, 00:10:02.591 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:02.591 "is_configured": false, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 65536 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "name": "BaseBdev3", 00:10:02.591 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 
00:10:02.591 "is_configured": true, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 65536 00:10:02.591 }, 00:10:02.591 { 00:10:02.591 "name": "BaseBdev4", 00:10:02.591 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:02.591 "is_configured": true, 00:10:02.591 "data_offset": 0, 00:10:02.591 "data_size": 65536 00:10:02.591 } 00:10:02.591 ] 00:10:02.591 }' 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.591 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 [2024-11-19 10:21:16.679932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.159 10:21:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.159 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.159 "name": "Existed_Raid", 00:10:03.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.159 "strip_size_kb": 64, 00:10:03.159 "state": "configuring", 00:10:03.159 "raid_level": "raid0", 00:10:03.159 "superblock": false, 00:10:03.159 "num_base_bdevs": 4, 00:10:03.159 "num_base_bdevs_discovered": 2, 00:10:03.159 
"num_base_bdevs_operational": 4, 00:10:03.159 "base_bdevs_list": [ 00:10:03.159 { 00:10:03.159 "name": null, 00:10:03.159 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:03.159 "is_configured": false, 00:10:03.159 "data_offset": 0, 00:10:03.159 "data_size": 65536 00:10:03.159 }, 00:10:03.159 { 00:10:03.159 "name": null, 00:10:03.159 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:03.159 "is_configured": false, 00:10:03.159 "data_offset": 0, 00:10:03.159 "data_size": 65536 00:10:03.159 }, 00:10:03.159 { 00:10:03.159 "name": "BaseBdev3", 00:10:03.159 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:03.159 "is_configured": true, 00:10:03.159 "data_offset": 0, 00:10:03.159 "data_size": 65536 00:10:03.159 }, 00:10:03.160 { 00:10:03.160 "name": "BaseBdev4", 00:10:03.160 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:03.160 "is_configured": true, 00:10:03.160 "data_offset": 0, 00:10:03.160 "data_size": 65536 00:10:03.160 } 00:10:03.160 ] 00:10:03.160 }' 00:10:03.160 10:21:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.160 10:21:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.418 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.418 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.418 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.418 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.678 [2024-11-19 10:21:17.229282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.678 
10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.678 "name": "Existed_Raid", 00:10:03.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.678 "strip_size_kb": 64, 00:10:03.678 "state": "configuring", 00:10:03.678 "raid_level": "raid0", 00:10:03.678 "superblock": false, 00:10:03.678 "num_base_bdevs": 4, 00:10:03.678 "num_base_bdevs_discovered": 3, 00:10:03.678 "num_base_bdevs_operational": 4, 00:10:03.678 "base_bdevs_list": [ 00:10:03.678 { 00:10:03.678 "name": null, 00:10:03.678 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:03.678 "is_configured": false, 00:10:03.678 "data_offset": 0, 00:10:03.678 "data_size": 65536 00:10:03.678 }, 00:10:03.678 { 00:10:03.678 "name": "BaseBdev2", 00:10:03.678 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:03.678 "is_configured": true, 00:10:03.678 "data_offset": 0, 00:10:03.678 "data_size": 65536 00:10:03.678 }, 00:10:03.678 { 00:10:03.678 "name": "BaseBdev3", 00:10:03.678 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:03.678 "is_configured": true, 00:10:03.678 "data_offset": 0, 00:10:03.678 "data_size": 65536 00:10:03.678 }, 00:10:03.678 { 00:10:03.678 "name": "BaseBdev4", 00:10:03.678 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:03.678 "is_configured": true, 00:10:03.678 "data_offset": 0, 00:10:03.678 "data_size": 65536 00:10:03.678 } 00:10:03.678 ] 00:10:03.678 }' 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.678 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.936 10:21:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.936 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.195 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.196 [2024-11-19 10:21:17.783991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:04.196 [2024-11-19 10:21:17.784106] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:04.196 [2024-11-19 10:21:17.784132] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:04.196 [2024-11-19 10:21:17.784426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:04.196 [2024-11-19 10:21:17.784606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:04.196 [2024-11-19 10:21:17.784651] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:04.196 [2024-11-19 10:21:17.784927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.196 NewBaseBdev 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:04.196 [ 00:10:04.196 { 00:10:04.196 "name": "NewBaseBdev", 00:10:04.196 "aliases": [ 00:10:04.196 "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27" 00:10:04.196 ], 00:10:04.196 "product_name": "Malloc disk", 00:10:04.196 "block_size": 512, 00:10:04.196 "num_blocks": 65536, 00:10:04.196 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:04.196 "assigned_rate_limits": { 00:10:04.196 "rw_ios_per_sec": 0, 00:10:04.196 "rw_mbytes_per_sec": 0, 00:10:04.196 "r_mbytes_per_sec": 0, 00:10:04.196 "w_mbytes_per_sec": 0 00:10:04.196 }, 00:10:04.196 "claimed": true, 00:10:04.196 "claim_type": "exclusive_write", 00:10:04.196 "zoned": false, 00:10:04.196 "supported_io_types": { 00:10:04.196 "read": true, 00:10:04.196 "write": true, 00:10:04.196 "unmap": true, 00:10:04.196 "flush": true, 00:10:04.196 "reset": true, 00:10:04.196 "nvme_admin": false, 00:10:04.196 "nvme_io": false, 00:10:04.196 "nvme_io_md": false, 00:10:04.196 "write_zeroes": true, 00:10:04.196 "zcopy": true, 00:10:04.196 "get_zone_info": false, 00:10:04.196 "zone_management": false, 00:10:04.196 "zone_append": false, 00:10:04.196 "compare": false, 00:10:04.196 "compare_and_write": false, 00:10:04.196 "abort": true, 00:10:04.196 "seek_hole": false, 00:10:04.196 "seek_data": false, 00:10:04.196 "copy": true, 00:10:04.196 "nvme_iov_md": false 00:10:04.196 }, 00:10:04.196 "memory_domains": [ 00:10:04.196 { 00:10:04.196 "dma_device_id": "system", 00:10:04.196 "dma_device_type": 1 00:10:04.196 }, 00:10:04.196 { 00:10:04.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.196 "dma_device_type": 2 00:10:04.196 } 00:10:04.196 ], 00:10:04.196 "driver_specific": {} 00:10:04.196 } 00:10:04.196 ] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.196 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.196 "name": "Existed_Raid", 00:10:04.196 "uuid": "8d4f347d-3c79-491e-b462-152402d1922e", 00:10:04.196 "strip_size_kb": 64, 00:10:04.196 "state": "online", 00:10:04.196 "raid_level": "raid0", 00:10:04.196 "superblock": false, 00:10:04.196 "num_base_bdevs": 4, 00:10:04.196 
"num_base_bdevs_discovered": 4, 00:10:04.196 "num_base_bdevs_operational": 4, 00:10:04.196 "base_bdevs_list": [ 00:10:04.196 { 00:10:04.196 "name": "NewBaseBdev", 00:10:04.196 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 0, 00:10:04.196 "data_size": 65536 00:10:04.196 }, 00:10:04.196 { 00:10:04.196 "name": "BaseBdev2", 00:10:04.196 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 0, 00:10:04.196 "data_size": 65536 00:10:04.196 }, 00:10:04.196 { 00:10:04.196 "name": "BaseBdev3", 00:10:04.196 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 0, 00:10:04.196 "data_size": 65536 00:10:04.196 }, 00:10:04.196 { 00:10:04.196 "name": "BaseBdev4", 00:10:04.196 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:04.196 "is_configured": true, 00:10:04.196 "data_offset": 0, 00:10:04.197 "data_size": 65536 00:10:04.197 } 00:10:04.197 ] 00:10:04.197 }' 00:10:04.197 10:21:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.197 10:21:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 [2024-11-19 10:21:18.271552] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.765 "name": "Existed_Raid", 00:10:04.765 "aliases": [ 00:10:04.765 "8d4f347d-3c79-491e-b462-152402d1922e" 00:10:04.765 ], 00:10:04.765 "product_name": "Raid Volume", 00:10:04.765 "block_size": 512, 00:10:04.765 "num_blocks": 262144, 00:10:04.765 "uuid": "8d4f347d-3c79-491e-b462-152402d1922e", 00:10:04.765 "assigned_rate_limits": { 00:10:04.765 "rw_ios_per_sec": 0, 00:10:04.765 "rw_mbytes_per_sec": 0, 00:10:04.765 "r_mbytes_per_sec": 0, 00:10:04.765 "w_mbytes_per_sec": 0 00:10:04.765 }, 00:10:04.765 "claimed": false, 00:10:04.765 "zoned": false, 00:10:04.765 "supported_io_types": { 00:10:04.765 "read": true, 00:10:04.765 "write": true, 00:10:04.765 "unmap": true, 00:10:04.765 "flush": true, 00:10:04.765 "reset": true, 00:10:04.765 "nvme_admin": false, 00:10:04.765 "nvme_io": false, 00:10:04.765 "nvme_io_md": false, 00:10:04.765 "write_zeroes": true, 00:10:04.765 "zcopy": false, 00:10:04.765 "get_zone_info": false, 00:10:04.765 "zone_management": false, 00:10:04.765 "zone_append": false, 00:10:04.765 "compare": false, 00:10:04.765 "compare_and_write": false, 00:10:04.765 "abort": false, 00:10:04.765 "seek_hole": false, 00:10:04.765 "seek_data": false, 00:10:04.765 "copy": false, 00:10:04.765 "nvme_iov_md": false 00:10:04.765 }, 00:10:04.765 "memory_domains": [ 
00:10:04.765 { 00:10:04.765 "dma_device_id": "system", 00:10:04.765 "dma_device_type": 1 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.765 "dma_device_type": 2 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "system", 00:10:04.765 "dma_device_type": 1 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.765 "dma_device_type": 2 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "system", 00:10:04.765 "dma_device_type": 1 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.765 "dma_device_type": 2 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "system", 00:10:04.765 "dma_device_type": 1 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.765 "dma_device_type": 2 00:10:04.765 } 00:10:04.765 ], 00:10:04.765 "driver_specific": { 00:10:04.765 "raid": { 00:10:04.765 "uuid": "8d4f347d-3c79-491e-b462-152402d1922e", 00:10:04.765 "strip_size_kb": 64, 00:10:04.765 "state": "online", 00:10:04.765 "raid_level": "raid0", 00:10:04.765 "superblock": false, 00:10:04.765 "num_base_bdevs": 4, 00:10:04.765 "num_base_bdevs_discovered": 4, 00:10:04.765 "num_base_bdevs_operational": 4, 00:10:04.765 "base_bdevs_list": [ 00:10:04.765 { 00:10:04.765 "name": "NewBaseBdev", 00:10:04.765 "uuid": "45fb1d53-e3d2-4fd3-84aa-fbce3ffddc27", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 0, 00:10:04.765 "data_size": 65536 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "name": "BaseBdev2", 00:10:04.765 "uuid": "9339498b-693e-4ac6-825a-719160fdf04c", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 0, 00:10:04.765 "data_size": 65536 00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "name": "BaseBdev3", 00:10:04.765 "uuid": "bd3d8c1d-40e0-44cd-b1ca-b3d338c206ba", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 0, 00:10:04.765 "data_size": 65536 
00:10:04.765 }, 00:10:04.765 { 00:10:04.765 "name": "BaseBdev4", 00:10:04.765 "uuid": "cc4b1f00-9baa-470e-bf82-9823d69f9374", 00:10:04.765 "is_configured": true, 00:10:04.765 "data_offset": 0, 00:10:04.765 "data_size": 65536 00:10:04.765 } 00:10:04.765 ] 00:10:04.765 } 00:10:04.765 } 00:10:04.765 }' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.765 BaseBdev2 00:10:04.765 BaseBdev3 00:10:04.765 BaseBdev4' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.765 
10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.765 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.766 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.024 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.024 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:05.024 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:05.024 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:05.024 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.024 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.024 [2024-11-19 10:21:18.586662] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:05.024 [2024-11-19 10:21:18.586746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.024 [2024-11-19 10:21:18.586849] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.024 [2024-11-19 10:21:18.586962] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:05.025 [2024-11-19 10:21:18.587049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69178 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69178 ']' 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69178 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69178 00:10:05.025 killing process with pid 69178 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69178' 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69178 00:10:05.025 [2024-11-19 10:21:18.634442] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:05.025 10:21:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69178 00:10:05.283 [2024-11-19 10:21:19.004098] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:06.659 00:10:06.659 real 0m11.129s 00:10:06.659 user 0m17.761s 00:10:06.659 sys 0m1.924s 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.659 ************************************ 00:10:06.659 END TEST raid_state_function_test 00:10:06.659 ************************************ 00:10:06.659 10:21:20 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:06.659 10:21:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:06.659 10:21:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.659 10:21:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:06.659 ************************************ 00:10:06.659 START TEST raid_state_function_test_sb 00:10:06.659 ************************************ 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:06.659 
10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69844 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:06.659 Process raid pid: 69844 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69844' 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69844 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69844 ']' 00:10:06.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:06.659 10:21:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.659 [2024-11-19 10:21:20.236288] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:06.659 [2024-11-19 10:21:20.236408] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.659 [2024-11-19 10:21:20.409520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.918 [2024-11-19 10:21:20.518341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.176 [2024-11-19 10:21:20.715197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.176 [2024-11-19 10:21:20.715226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.435 [2024-11-19 10:21:21.056262] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.435 [2024-11-19 10:21:21.056317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.435 [2024-11-19 10:21:21.056327] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.435 [2024-11-19 10:21:21.056353] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.435 [2024-11-19 10:21:21.056360] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:07.435 [2024-11-19 10:21:21.056368] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.435 [2024-11-19 10:21:21.056375] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:07.435 [2024-11-19 10:21:21.056384] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.435 10:21:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.435 "name": "Existed_Raid", 00:10:07.435 "uuid": "b19c49ec-b317-4ee3-bdfd-04483ac7e2cc", 00:10:07.435 "strip_size_kb": 64, 00:10:07.435 "state": "configuring", 00:10:07.435 "raid_level": "raid0", 00:10:07.435 "superblock": true, 00:10:07.435 "num_base_bdevs": 4, 00:10:07.435 "num_base_bdevs_discovered": 0, 00:10:07.435 "num_base_bdevs_operational": 4, 00:10:07.435 "base_bdevs_list": [ 00:10:07.435 { 00:10:07.435 "name": "BaseBdev1", 00:10:07.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.435 "is_configured": false, 00:10:07.435 "data_offset": 0, 00:10:07.435 "data_size": 0 00:10:07.435 }, 00:10:07.435 { 00:10:07.435 "name": "BaseBdev2", 00:10:07.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.435 "is_configured": false, 00:10:07.435 "data_offset": 0, 00:10:07.435 "data_size": 0 00:10:07.435 }, 00:10:07.435 { 00:10:07.435 "name": "BaseBdev3", 00:10:07.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.435 "is_configured": false, 00:10:07.435 "data_offset": 0, 00:10:07.435 "data_size": 0 00:10:07.435 }, 00:10:07.435 { 00:10:07.435 "name": "BaseBdev4", 00:10:07.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.435 "is_configured": false, 00:10:07.435 "data_offset": 0, 00:10:07.435 "data_size": 0 00:10:07.435 } 00:10:07.435 ] 00:10:07.435 }' 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.435 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.693 10:21:21 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.694 [2024-11-19 10:21:21.455519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.694 [2024-11-19 10:21:21.455631] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.694 [2024-11-19 10:21:21.463527] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.694 [2024-11-19 10:21:21.463614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.694 [2024-11-19 10:21:21.463642] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.694 [2024-11-19 10:21:21.463666] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.694 [2024-11-19 10:21:21.463684] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.694 [2024-11-19 10:21:21.463706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.694 [2024-11-19 10:21:21.463724] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:07.694 [2024-11-19 10:21:21.463745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.694 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.953 [2024-11-19 10:21:21.507897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.953 BaseBdev1 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.954 [ 00:10:07.954 { 00:10:07.954 "name": "BaseBdev1", 00:10:07.954 "aliases": [ 00:10:07.954 "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe" 00:10:07.954 ], 00:10:07.954 "product_name": "Malloc disk", 00:10:07.954 "block_size": 512, 00:10:07.954 "num_blocks": 65536, 00:10:07.954 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:07.954 "assigned_rate_limits": { 00:10:07.954 "rw_ios_per_sec": 0, 00:10:07.954 "rw_mbytes_per_sec": 0, 00:10:07.954 "r_mbytes_per_sec": 0, 00:10:07.954 "w_mbytes_per_sec": 0 00:10:07.954 }, 00:10:07.954 "claimed": true, 00:10:07.954 "claim_type": "exclusive_write", 00:10:07.954 "zoned": false, 00:10:07.954 "supported_io_types": { 00:10:07.954 "read": true, 00:10:07.954 "write": true, 00:10:07.954 "unmap": true, 00:10:07.954 "flush": true, 00:10:07.954 "reset": true, 00:10:07.954 "nvme_admin": false, 00:10:07.954 "nvme_io": false, 00:10:07.954 "nvme_io_md": false, 00:10:07.954 "write_zeroes": true, 00:10:07.954 "zcopy": true, 00:10:07.954 "get_zone_info": false, 00:10:07.954 "zone_management": false, 00:10:07.954 "zone_append": false, 00:10:07.954 "compare": false, 00:10:07.954 "compare_and_write": false, 00:10:07.954 "abort": true, 00:10:07.954 "seek_hole": false, 00:10:07.954 "seek_data": false, 00:10:07.954 "copy": true, 00:10:07.954 "nvme_iov_md": false 00:10:07.954 }, 00:10:07.954 "memory_domains": [ 00:10:07.954 { 00:10:07.954 "dma_device_id": "system", 00:10:07.954 "dma_device_type": 1 00:10:07.954 }, 00:10:07.954 { 00:10:07.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.954 "dma_device_type": 2 00:10:07.954 } 00:10:07.954 ], 00:10:07.954 "driver_specific": {} 
00:10:07.954 } 00:10:07.954 ] 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.954 "name": "Existed_Raid", 00:10:07.954 "uuid": "c99b27c8-204d-4112-9ec5-27f681ce4ae9", 00:10:07.954 "strip_size_kb": 64, 00:10:07.954 "state": "configuring", 00:10:07.954 "raid_level": "raid0", 00:10:07.954 "superblock": true, 00:10:07.954 "num_base_bdevs": 4, 00:10:07.954 "num_base_bdevs_discovered": 1, 00:10:07.954 "num_base_bdevs_operational": 4, 00:10:07.954 "base_bdevs_list": [ 00:10:07.954 { 00:10:07.954 "name": "BaseBdev1", 00:10:07.954 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:07.954 "is_configured": true, 00:10:07.954 "data_offset": 2048, 00:10:07.954 "data_size": 63488 00:10:07.954 }, 00:10:07.954 { 00:10:07.954 "name": "BaseBdev2", 00:10:07.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.954 "is_configured": false, 00:10:07.954 "data_offset": 0, 00:10:07.954 "data_size": 0 00:10:07.954 }, 00:10:07.954 { 00:10:07.954 "name": "BaseBdev3", 00:10:07.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.954 "is_configured": false, 00:10:07.954 "data_offset": 0, 00:10:07.954 "data_size": 0 00:10:07.954 }, 00:10:07.954 { 00:10:07.954 "name": "BaseBdev4", 00:10:07.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.954 "is_configured": false, 00:10:07.954 "data_offset": 0, 00:10:07.954 "data_size": 0 00:10:07.954 } 00:10:07.954 ] 00:10:07.954 }' 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.954 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.214 [2024-11-19 10:21:21.979139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:08.214 [2024-11-19 10:21:21.979192] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.214 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.214 [2024-11-19 10:21:21.991182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:08.214 [2024-11-19 10:21:21.992889] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:08.214 [2024-11-19 10:21:21.992934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:08.214 [2024-11-19 10:21:21.992944] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:08.214 [2024-11-19 10:21:21.992954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:08.214 [2024-11-19 10:21:21.992961] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:08.214 [2024-11-19 10:21:21.992969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:08.474 10:21:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.474 10:21:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.474 "name": 
"Existed_Raid", 00:10:08.474 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:08.474 "strip_size_kb": 64, 00:10:08.474 "state": "configuring", 00:10:08.474 "raid_level": "raid0", 00:10:08.474 "superblock": true, 00:10:08.474 "num_base_bdevs": 4, 00:10:08.474 "num_base_bdevs_discovered": 1, 00:10:08.474 "num_base_bdevs_operational": 4, 00:10:08.474 "base_bdevs_list": [ 00:10:08.474 { 00:10:08.474 "name": "BaseBdev1", 00:10:08.474 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:08.474 "is_configured": true, 00:10:08.474 "data_offset": 2048, 00:10:08.474 "data_size": 63488 00:10:08.474 }, 00:10:08.474 { 00:10:08.474 "name": "BaseBdev2", 00:10:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.474 "is_configured": false, 00:10:08.474 "data_offset": 0, 00:10:08.474 "data_size": 0 00:10:08.474 }, 00:10:08.474 { 00:10:08.474 "name": "BaseBdev3", 00:10:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.474 "is_configured": false, 00:10:08.474 "data_offset": 0, 00:10:08.474 "data_size": 0 00:10:08.474 }, 00:10:08.474 { 00:10:08.474 "name": "BaseBdev4", 00:10:08.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.474 "is_configured": false, 00:10:08.474 "data_offset": 0, 00:10:08.474 "data_size": 0 00:10:08.474 } 00:10:08.474 ] 00:10:08.474 }' 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.474 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.734 [2024-11-19 10:21:22.434203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:08.734 BaseBdev2 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.734 [ 00:10:08.734 { 00:10:08.734 "name": "BaseBdev2", 00:10:08.734 "aliases": [ 00:10:08.734 "a68a56a1-19e6-43c6-8992-2df7b634d970" 00:10:08.734 ], 00:10:08.734 "product_name": "Malloc disk", 00:10:08.734 "block_size": 512, 00:10:08.734 "num_blocks": 65536, 00:10:08.734 "uuid": "a68a56a1-19e6-43c6-8992-2df7b634d970", 00:10:08.734 
"assigned_rate_limits": { 00:10:08.734 "rw_ios_per_sec": 0, 00:10:08.734 "rw_mbytes_per_sec": 0, 00:10:08.734 "r_mbytes_per_sec": 0, 00:10:08.734 "w_mbytes_per_sec": 0 00:10:08.734 }, 00:10:08.734 "claimed": true, 00:10:08.734 "claim_type": "exclusive_write", 00:10:08.734 "zoned": false, 00:10:08.734 "supported_io_types": { 00:10:08.734 "read": true, 00:10:08.734 "write": true, 00:10:08.734 "unmap": true, 00:10:08.734 "flush": true, 00:10:08.734 "reset": true, 00:10:08.734 "nvme_admin": false, 00:10:08.734 "nvme_io": false, 00:10:08.734 "nvme_io_md": false, 00:10:08.734 "write_zeroes": true, 00:10:08.734 "zcopy": true, 00:10:08.734 "get_zone_info": false, 00:10:08.734 "zone_management": false, 00:10:08.734 "zone_append": false, 00:10:08.734 "compare": false, 00:10:08.734 "compare_and_write": false, 00:10:08.734 "abort": true, 00:10:08.734 "seek_hole": false, 00:10:08.734 "seek_data": false, 00:10:08.734 "copy": true, 00:10:08.734 "nvme_iov_md": false 00:10:08.734 }, 00:10:08.734 "memory_domains": [ 00:10:08.734 { 00:10:08.734 "dma_device_id": "system", 00:10:08.734 "dma_device_type": 1 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.734 "dma_device_type": 2 00:10:08.734 } 00:10:08.734 ], 00:10:08.734 "driver_specific": {} 00:10:08.734 } 00:10:08.734 ] 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.734 "name": "Existed_Raid", 00:10:08.734 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:08.734 "strip_size_kb": 64, 00:10:08.734 "state": "configuring", 00:10:08.734 "raid_level": "raid0", 00:10:08.734 "superblock": true, 00:10:08.734 "num_base_bdevs": 4, 00:10:08.734 "num_base_bdevs_discovered": 2, 00:10:08.734 "num_base_bdevs_operational": 4, 
00:10:08.734 "base_bdevs_list": [ 00:10:08.734 { 00:10:08.734 "name": "BaseBdev1", 00:10:08.734 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:08.734 "is_configured": true, 00:10:08.734 "data_offset": 2048, 00:10:08.734 "data_size": 63488 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "name": "BaseBdev2", 00:10:08.734 "uuid": "a68a56a1-19e6-43c6-8992-2df7b634d970", 00:10:08.734 "is_configured": true, 00:10:08.734 "data_offset": 2048, 00:10:08.734 "data_size": 63488 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "name": "BaseBdev3", 00:10:08.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.734 "is_configured": false, 00:10:08.734 "data_offset": 0, 00:10:08.734 "data_size": 0 00:10:08.734 }, 00:10:08.734 { 00:10:08.734 "name": "BaseBdev4", 00:10:08.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.734 "is_configured": false, 00:10:08.734 "data_offset": 0, 00:10:08.734 "data_size": 0 00:10:08.734 } 00:10:08.734 ] 00:10:08.734 }' 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.734 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.303 [2024-11-19 10:21:22.930957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:09.303 BaseBdev3 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.303 [ 00:10:09.303 { 00:10:09.303 "name": "BaseBdev3", 00:10:09.303 "aliases": [ 00:10:09.303 "5034d9c3-725b-4e9f-aed1-c8569c730432" 00:10:09.303 ], 00:10:09.303 "product_name": "Malloc disk", 00:10:09.303 "block_size": 512, 00:10:09.303 "num_blocks": 65536, 00:10:09.303 "uuid": "5034d9c3-725b-4e9f-aed1-c8569c730432", 00:10:09.303 "assigned_rate_limits": { 00:10:09.303 "rw_ios_per_sec": 0, 00:10:09.303 "rw_mbytes_per_sec": 0, 00:10:09.303 "r_mbytes_per_sec": 0, 00:10:09.303 "w_mbytes_per_sec": 0 00:10:09.303 }, 00:10:09.303 "claimed": true, 00:10:09.303 "claim_type": "exclusive_write", 00:10:09.303 "zoned": false, 00:10:09.303 "supported_io_types": { 00:10:09.303 "read": true, 00:10:09.303 
"write": true, 00:10:09.303 "unmap": true, 00:10:09.303 "flush": true, 00:10:09.303 "reset": true, 00:10:09.303 "nvme_admin": false, 00:10:09.303 "nvme_io": false, 00:10:09.303 "nvme_io_md": false, 00:10:09.303 "write_zeroes": true, 00:10:09.303 "zcopy": true, 00:10:09.303 "get_zone_info": false, 00:10:09.303 "zone_management": false, 00:10:09.303 "zone_append": false, 00:10:09.303 "compare": false, 00:10:09.303 "compare_and_write": false, 00:10:09.303 "abort": true, 00:10:09.303 "seek_hole": false, 00:10:09.303 "seek_data": false, 00:10:09.303 "copy": true, 00:10:09.303 "nvme_iov_md": false 00:10:09.303 }, 00:10:09.303 "memory_domains": [ 00:10:09.303 { 00:10:09.303 "dma_device_id": "system", 00:10:09.303 "dma_device_type": 1 00:10:09.303 }, 00:10:09.303 { 00:10:09.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.303 "dma_device_type": 2 00:10:09.303 } 00:10:09.303 ], 00:10:09.303 "driver_specific": {} 00:10:09.303 } 00:10:09.303 ] 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.303 10:21:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.304 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.304 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.304 10:21:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.304 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.304 "name": "Existed_Raid", 00:10:09.304 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:09.304 "strip_size_kb": 64, 00:10:09.304 "state": "configuring", 00:10:09.304 "raid_level": "raid0", 00:10:09.304 "superblock": true, 00:10:09.304 "num_base_bdevs": 4, 00:10:09.304 "num_base_bdevs_discovered": 3, 00:10:09.304 "num_base_bdevs_operational": 4, 00:10:09.304 "base_bdevs_list": [ 00:10:09.304 { 00:10:09.304 "name": "BaseBdev1", 00:10:09.304 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:09.304 "is_configured": true, 00:10:09.304 "data_offset": 2048, 00:10:09.304 "data_size": 63488 00:10:09.304 }, 00:10:09.304 { 00:10:09.304 "name": "BaseBdev2", 00:10:09.304 "uuid": 
"a68a56a1-19e6-43c6-8992-2df7b634d970", 00:10:09.304 "is_configured": true, 00:10:09.304 "data_offset": 2048, 00:10:09.304 "data_size": 63488 00:10:09.304 }, 00:10:09.304 { 00:10:09.304 "name": "BaseBdev3", 00:10:09.304 "uuid": "5034d9c3-725b-4e9f-aed1-c8569c730432", 00:10:09.304 "is_configured": true, 00:10:09.304 "data_offset": 2048, 00:10:09.304 "data_size": 63488 00:10:09.304 }, 00:10:09.304 { 00:10:09.304 "name": "BaseBdev4", 00:10:09.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.304 "is_configured": false, 00:10:09.304 "data_offset": 0, 00:10:09.304 "data_size": 0 00:10:09.304 } 00:10:09.304 ] 00:10:09.304 }' 00:10:09.304 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.304 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 [2024-11-19 10:21:23.462308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:09.873 [2024-11-19 10:21:23.462551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:09.873 [2024-11-19 10:21:23.462565] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:09.873 [2024-11-19 10:21:23.462808] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:09.873 [2024-11-19 10:21:23.462968] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:09.873 [2024-11-19 10:21:23.462980] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:09.873 BaseBdev4 00:10:09.873 [2024-11-19 10:21:23.463141] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.873 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.873 [ 00:10:09.873 { 00:10:09.873 "name": "BaseBdev4", 00:10:09.873 "aliases": [ 00:10:09.873 "2d4d9db6-717a-4479-a34a-87602af97d7b" 00:10:09.873 ], 00:10:09.873 "product_name": "Malloc disk", 00:10:09.873 "block_size": 512, 00:10:09.873 
"num_blocks": 65536, 00:10:09.873 "uuid": "2d4d9db6-717a-4479-a34a-87602af97d7b", 00:10:09.873 "assigned_rate_limits": { 00:10:09.873 "rw_ios_per_sec": 0, 00:10:09.873 "rw_mbytes_per_sec": 0, 00:10:09.873 "r_mbytes_per_sec": 0, 00:10:09.873 "w_mbytes_per_sec": 0 00:10:09.873 }, 00:10:09.873 "claimed": true, 00:10:09.873 "claim_type": "exclusive_write", 00:10:09.873 "zoned": false, 00:10:09.873 "supported_io_types": { 00:10:09.873 "read": true, 00:10:09.873 "write": true, 00:10:09.873 "unmap": true, 00:10:09.873 "flush": true, 00:10:09.873 "reset": true, 00:10:09.873 "nvme_admin": false, 00:10:09.873 "nvme_io": false, 00:10:09.873 "nvme_io_md": false, 00:10:09.873 "write_zeroes": true, 00:10:09.873 "zcopy": true, 00:10:09.873 "get_zone_info": false, 00:10:09.873 "zone_management": false, 00:10:09.873 "zone_append": false, 00:10:09.873 "compare": false, 00:10:09.873 "compare_and_write": false, 00:10:09.873 "abort": true, 00:10:09.873 "seek_hole": false, 00:10:09.873 "seek_data": false, 00:10:09.873 "copy": true, 00:10:09.873 "nvme_iov_md": false 00:10:09.873 }, 00:10:09.873 "memory_domains": [ 00:10:09.873 { 00:10:09.873 "dma_device_id": "system", 00:10:09.873 "dma_device_type": 1 00:10:09.873 }, 00:10:09.873 { 00:10:09.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.874 "dma_device_type": 2 00:10:09.874 } 00:10:09.874 ], 00:10:09.874 "driver_specific": {} 00:10:09.874 } 00:10:09.874 ] 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.874 "name": "Existed_Raid", 00:10:09.874 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:09.874 "strip_size_kb": 64, 00:10:09.874 "state": "online", 00:10:09.874 "raid_level": "raid0", 00:10:09.874 "superblock": true, 00:10:09.874 "num_base_bdevs": 4, 
00:10:09.874 "num_base_bdevs_discovered": 4, 00:10:09.874 "num_base_bdevs_operational": 4, 00:10:09.874 "base_bdevs_list": [ 00:10:09.874 { 00:10:09.874 "name": "BaseBdev1", 00:10:09.874 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:09.874 "is_configured": true, 00:10:09.874 "data_offset": 2048, 00:10:09.874 "data_size": 63488 00:10:09.874 }, 00:10:09.874 { 00:10:09.874 "name": "BaseBdev2", 00:10:09.874 "uuid": "a68a56a1-19e6-43c6-8992-2df7b634d970", 00:10:09.874 "is_configured": true, 00:10:09.874 "data_offset": 2048, 00:10:09.874 "data_size": 63488 00:10:09.874 }, 00:10:09.874 { 00:10:09.874 "name": "BaseBdev3", 00:10:09.874 "uuid": "5034d9c3-725b-4e9f-aed1-c8569c730432", 00:10:09.874 "is_configured": true, 00:10:09.874 "data_offset": 2048, 00:10:09.874 "data_size": 63488 00:10:09.874 }, 00:10:09.874 { 00:10:09.874 "name": "BaseBdev4", 00:10:09.874 "uuid": "2d4d9db6-717a-4479-a34a-87602af97d7b", 00:10:09.874 "is_configured": true, 00:10:09.874 "data_offset": 2048, 00:10:09.874 "data_size": 63488 00:10:09.874 } 00:10:09.874 ] 00:10:09.874 }' 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.874 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.443 
10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.443 [2024-11-19 10:21:23.941872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.443 "name": "Existed_Raid", 00:10:10.443 "aliases": [ 00:10:10.443 "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe" 00:10:10.443 ], 00:10:10.443 "product_name": "Raid Volume", 00:10:10.443 "block_size": 512, 00:10:10.443 "num_blocks": 253952, 00:10:10.443 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:10.443 "assigned_rate_limits": { 00:10:10.443 "rw_ios_per_sec": 0, 00:10:10.443 "rw_mbytes_per_sec": 0, 00:10:10.443 "r_mbytes_per_sec": 0, 00:10:10.443 "w_mbytes_per_sec": 0 00:10:10.443 }, 00:10:10.443 "claimed": false, 00:10:10.443 "zoned": false, 00:10:10.443 "supported_io_types": { 00:10:10.443 "read": true, 00:10:10.443 "write": true, 00:10:10.443 "unmap": true, 00:10:10.443 "flush": true, 00:10:10.443 "reset": true, 00:10:10.443 "nvme_admin": false, 00:10:10.443 "nvme_io": false, 00:10:10.443 "nvme_io_md": false, 00:10:10.443 "write_zeroes": true, 00:10:10.443 "zcopy": false, 00:10:10.443 "get_zone_info": false, 00:10:10.443 "zone_management": false, 00:10:10.443 "zone_append": false, 00:10:10.443 "compare": false, 00:10:10.443 "compare_and_write": false, 00:10:10.443 "abort": false, 00:10:10.443 "seek_hole": false, 00:10:10.443 "seek_data": false, 00:10:10.443 "copy": false, 00:10:10.443 
"nvme_iov_md": false 00:10:10.443 }, 00:10:10.443 "memory_domains": [ 00:10:10.443 { 00:10:10.443 "dma_device_id": "system", 00:10:10.443 "dma_device_type": 1 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.443 "dma_device_type": 2 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "system", 00:10:10.443 "dma_device_type": 1 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.443 "dma_device_type": 2 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "system", 00:10:10.443 "dma_device_type": 1 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.443 "dma_device_type": 2 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "system", 00:10:10.443 "dma_device_type": 1 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.443 "dma_device_type": 2 00:10:10.443 } 00:10:10.443 ], 00:10:10.443 "driver_specific": { 00:10:10.443 "raid": { 00:10:10.443 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:10.443 "strip_size_kb": 64, 00:10:10.443 "state": "online", 00:10:10.443 "raid_level": "raid0", 00:10:10.443 "superblock": true, 00:10:10.443 "num_base_bdevs": 4, 00:10:10.443 "num_base_bdevs_discovered": 4, 00:10:10.443 "num_base_bdevs_operational": 4, 00:10:10.443 "base_bdevs_list": [ 00:10:10.443 { 00:10:10.443 "name": "BaseBdev1", 00:10:10.443 "uuid": "8aae1cba-6d07-4ece-8f99-8e8ea1294bfe", 00:10:10.443 "is_configured": true, 00:10:10.443 "data_offset": 2048, 00:10:10.443 "data_size": 63488 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "name": "BaseBdev2", 00:10:10.443 "uuid": "a68a56a1-19e6-43c6-8992-2df7b634d970", 00:10:10.443 "is_configured": true, 00:10:10.443 "data_offset": 2048, 00:10:10.443 "data_size": 63488 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "name": "BaseBdev3", 00:10:10.443 "uuid": "5034d9c3-725b-4e9f-aed1-c8569c730432", 00:10:10.443 "is_configured": true, 
00:10:10.443 "data_offset": 2048, 00:10:10.443 "data_size": 63488 00:10:10.443 }, 00:10:10.443 { 00:10:10.443 "name": "BaseBdev4", 00:10:10.443 "uuid": "2d4d9db6-717a-4479-a34a-87602af97d7b", 00:10:10.443 "is_configured": true, 00:10:10.443 "data_offset": 2048, 00:10:10.443 "data_size": 63488 00:10:10.443 } 00:10:10.443 ] 00:10:10.443 } 00:10:10.443 } 00:10:10.443 }' 00:10:10.443 10:21:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:10.443 BaseBdev2 00:10:10.443 BaseBdev3 00:10:10.443 BaseBdev4' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.443 10:21:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.443 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.703 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.703 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.703 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:10.703 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.703 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:10.703 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.704 [2024-11-19 10:21:24.292968] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:10.704 [2024-11-19 10:21:24.293057] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.704 [2024-11-19 10:21:24.293149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.704 "name": "Existed_Raid", 00:10:10.704 "uuid": "1bc2f0ca-c669-46d4-9f9a-3bc6b7fe72fe", 00:10:10.704 "strip_size_kb": 64, 00:10:10.704 "state": "offline", 00:10:10.704 "raid_level": "raid0", 00:10:10.704 "superblock": true, 00:10:10.704 "num_base_bdevs": 4, 00:10:10.704 "num_base_bdevs_discovered": 3, 00:10:10.704 "num_base_bdevs_operational": 3, 00:10:10.704 "base_bdevs_list": [ 00:10:10.704 { 00:10:10.704 "name": null, 00:10:10.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.704 "is_configured": false, 00:10:10.704 "data_offset": 0, 00:10:10.704 "data_size": 63488 00:10:10.704 }, 00:10:10.704 { 00:10:10.704 "name": "BaseBdev2", 00:10:10.704 "uuid": "a68a56a1-19e6-43c6-8992-2df7b634d970", 00:10:10.704 "is_configured": true, 00:10:10.704 "data_offset": 2048, 00:10:10.704 "data_size": 63488 00:10:10.704 }, 00:10:10.704 { 00:10:10.704 "name": "BaseBdev3", 00:10:10.704 "uuid": "5034d9c3-725b-4e9f-aed1-c8569c730432", 00:10:10.704 "is_configured": true, 00:10:10.704 "data_offset": 2048, 00:10:10.704 "data_size": 63488 00:10:10.704 }, 00:10:10.704 { 00:10:10.704 "name": "BaseBdev4", 00:10:10.704 "uuid": "2d4d9db6-717a-4479-a34a-87602af97d7b", 00:10:10.704 "is_configured": true, 00:10:10.704 "data_offset": 2048, 00:10:10.704 "data_size": 63488 00:10:10.704 } 00:10:10.704 ] 00:10:10.704 }' 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.704 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.273 
10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.273 [2024-11-19 10:21:24.877036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.273 10:21:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:11.273 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.273 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.274 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:11.274 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.274 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.274 [2024-11-19 10:21:25.029557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:11.535 10:21:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.535 [2024-11-19 10:21:25.180296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:11.535 [2024-11-19 10:21:25.180347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:11.535 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 BaseBdev2 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 [ 00:10:11.796 { 00:10:11.796 "name": "BaseBdev2", 00:10:11.796 "aliases": [ 00:10:11.796 
"f74a7438-068e-44f9-a12a-9d7018a51383" 00:10:11.796 ], 00:10:11.796 "product_name": "Malloc disk", 00:10:11.796 "block_size": 512, 00:10:11.796 "num_blocks": 65536, 00:10:11.796 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:11.796 "assigned_rate_limits": { 00:10:11.796 "rw_ios_per_sec": 0, 00:10:11.796 "rw_mbytes_per_sec": 0, 00:10:11.796 "r_mbytes_per_sec": 0, 00:10:11.796 "w_mbytes_per_sec": 0 00:10:11.796 }, 00:10:11.796 "claimed": false, 00:10:11.796 "zoned": false, 00:10:11.796 "supported_io_types": { 00:10:11.796 "read": true, 00:10:11.796 "write": true, 00:10:11.796 "unmap": true, 00:10:11.796 "flush": true, 00:10:11.796 "reset": true, 00:10:11.796 "nvme_admin": false, 00:10:11.796 "nvme_io": false, 00:10:11.796 "nvme_io_md": false, 00:10:11.796 "write_zeroes": true, 00:10:11.796 "zcopy": true, 00:10:11.796 "get_zone_info": false, 00:10:11.796 "zone_management": false, 00:10:11.796 "zone_append": false, 00:10:11.796 "compare": false, 00:10:11.796 "compare_and_write": false, 00:10:11.796 "abort": true, 00:10:11.796 "seek_hole": false, 00:10:11.796 "seek_data": false, 00:10:11.796 "copy": true, 00:10:11.796 "nvme_iov_md": false 00:10:11.796 }, 00:10:11.796 "memory_domains": [ 00:10:11.796 { 00:10:11.796 "dma_device_id": "system", 00:10:11.796 "dma_device_type": 1 00:10:11.796 }, 00:10:11.796 { 00:10:11.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.796 "dma_device_type": 2 00:10:11.796 } 00:10:11.796 ], 00:10:11.796 "driver_specific": {} 00:10:11.796 } 00:10:11.796 ] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.796 10:21:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 BaseBdev3 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 [ 00:10:11.796 { 
00:10:11.796 "name": "BaseBdev3", 00:10:11.796 "aliases": [ 00:10:11.796 "ff9ace36-59ce-44cb-893b-284c022b7a67" 00:10:11.796 ], 00:10:11.796 "product_name": "Malloc disk", 00:10:11.796 "block_size": 512, 00:10:11.796 "num_blocks": 65536, 00:10:11.796 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:11.796 "assigned_rate_limits": { 00:10:11.796 "rw_ios_per_sec": 0, 00:10:11.796 "rw_mbytes_per_sec": 0, 00:10:11.796 "r_mbytes_per_sec": 0, 00:10:11.796 "w_mbytes_per_sec": 0 00:10:11.796 }, 00:10:11.796 "claimed": false, 00:10:11.796 "zoned": false, 00:10:11.796 "supported_io_types": { 00:10:11.796 "read": true, 00:10:11.796 "write": true, 00:10:11.796 "unmap": true, 00:10:11.796 "flush": true, 00:10:11.796 "reset": true, 00:10:11.796 "nvme_admin": false, 00:10:11.796 "nvme_io": false, 00:10:11.796 "nvme_io_md": false, 00:10:11.796 "write_zeroes": true, 00:10:11.796 "zcopy": true, 00:10:11.796 "get_zone_info": false, 00:10:11.796 "zone_management": false, 00:10:11.796 "zone_append": false, 00:10:11.796 "compare": false, 00:10:11.796 "compare_and_write": false, 00:10:11.796 "abort": true, 00:10:11.796 "seek_hole": false, 00:10:11.796 "seek_data": false, 00:10:11.796 "copy": true, 00:10:11.796 "nvme_iov_md": false 00:10:11.796 }, 00:10:11.796 "memory_domains": [ 00:10:11.796 { 00:10:11.796 "dma_device_id": "system", 00:10:11.796 "dma_device_type": 1 00:10:11.796 }, 00:10:11.796 { 00:10:11.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.796 "dma_device_type": 2 00:10:11.796 } 00:10:11.796 ], 00:10:11.796 "driver_specific": {} 00:10:11.796 } 00:10:11.796 ] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 BaseBdev4 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.796 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:11.797 [ 00:10:11.797 { 00:10:11.797 "name": "BaseBdev4", 00:10:11.797 "aliases": [ 00:10:11.797 "015b45d9-b4f8-43a9-959c-c1f36a2efca7" 00:10:11.797 ], 00:10:11.797 "product_name": "Malloc disk", 00:10:11.797 "block_size": 512, 00:10:11.797 "num_blocks": 65536, 00:10:11.797 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:11.797 "assigned_rate_limits": { 00:10:11.797 "rw_ios_per_sec": 0, 00:10:11.797 "rw_mbytes_per_sec": 0, 00:10:11.797 "r_mbytes_per_sec": 0, 00:10:11.797 "w_mbytes_per_sec": 0 00:10:11.797 }, 00:10:11.797 "claimed": false, 00:10:11.797 "zoned": false, 00:10:11.797 "supported_io_types": { 00:10:11.797 "read": true, 00:10:11.797 "write": true, 00:10:11.797 "unmap": true, 00:10:11.797 "flush": true, 00:10:11.797 "reset": true, 00:10:11.797 "nvme_admin": false, 00:10:11.797 "nvme_io": false, 00:10:11.797 "nvme_io_md": false, 00:10:11.797 "write_zeroes": true, 00:10:11.797 "zcopy": true, 00:10:11.797 "get_zone_info": false, 00:10:11.797 "zone_management": false, 00:10:11.797 "zone_append": false, 00:10:11.797 "compare": false, 00:10:11.797 "compare_and_write": false, 00:10:11.797 "abort": true, 00:10:11.797 "seek_hole": false, 00:10:11.797 "seek_data": false, 00:10:11.797 "copy": true, 00:10:11.797 "nvme_iov_md": false 00:10:11.797 }, 00:10:11.797 "memory_domains": [ 00:10:11.797 { 00:10:11.797 "dma_device_id": "system", 00:10:11.797 "dma_device_type": 1 00:10:11.797 }, 00:10:11.797 { 00:10:11.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.797 "dma_device_type": 2 00:10:11.797 } 00:10:11.797 ], 00:10:11.797 "driver_specific": {} 00:10:11.797 } 00:10:11.797 ] 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:11.797 10:21:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.797 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.797 [2024-11-19 10:21:25.570336] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.797 [2024-11-19 10:21:25.570449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.797 [2024-11-19 10:21:25.570490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.797 [2024-11-19 10:21:25.572240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.797 [2024-11-19 10:21:25.572330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.057 "name": "Existed_Raid", 00:10:12.057 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:12.057 "strip_size_kb": 64, 00:10:12.057 "state": "configuring", 00:10:12.057 "raid_level": "raid0", 00:10:12.057 "superblock": true, 00:10:12.057 "num_base_bdevs": 4, 00:10:12.057 "num_base_bdevs_discovered": 3, 00:10:12.057 "num_base_bdevs_operational": 4, 00:10:12.057 "base_bdevs_list": [ 00:10:12.057 { 00:10:12.057 "name": "BaseBdev1", 00:10:12.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.057 "is_configured": false, 00:10:12.057 "data_offset": 0, 00:10:12.057 "data_size": 0 00:10:12.057 }, 00:10:12.057 { 00:10:12.057 "name": "BaseBdev2", 00:10:12.057 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:12.057 "is_configured": true, 00:10:12.057 "data_offset": 2048, 00:10:12.057 "data_size": 63488 
00:10:12.057 }, 00:10:12.057 { 00:10:12.057 "name": "BaseBdev3", 00:10:12.057 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:12.057 "is_configured": true, 00:10:12.057 "data_offset": 2048, 00:10:12.057 "data_size": 63488 00:10:12.057 }, 00:10:12.057 { 00:10:12.057 "name": "BaseBdev4", 00:10:12.057 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:12.057 "is_configured": true, 00:10:12.057 "data_offset": 2048, 00:10:12.057 "data_size": 63488 00:10:12.057 } 00:10:12.057 ] 00:10:12.057 }' 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.057 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.317 [2024-11-19 10:21:25.957699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.317 10:21:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.317 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.317 "name": "Existed_Raid", 00:10:12.317 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:12.317 "strip_size_kb": 64, 00:10:12.317 "state": "configuring", 00:10:12.317 "raid_level": "raid0", 00:10:12.317 "superblock": true, 00:10:12.317 "num_base_bdevs": 4, 00:10:12.317 "num_base_bdevs_discovered": 2, 00:10:12.317 "num_base_bdevs_operational": 4, 00:10:12.317 "base_bdevs_list": [ 00:10:12.317 { 00:10:12.317 "name": "BaseBdev1", 00:10:12.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.317 "is_configured": false, 00:10:12.317 "data_offset": 0, 00:10:12.317 "data_size": 0 00:10:12.317 }, 00:10:12.317 { 00:10:12.317 "name": null, 00:10:12.317 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:12.317 "is_configured": false, 00:10:12.317 "data_offset": 0, 00:10:12.317 "data_size": 63488 
00:10:12.317 }, 00:10:12.317 { 00:10:12.317 "name": "BaseBdev3", 00:10:12.317 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:12.317 "is_configured": true, 00:10:12.317 "data_offset": 2048, 00:10:12.317 "data_size": 63488 00:10:12.317 }, 00:10:12.317 { 00:10:12.317 "name": "BaseBdev4", 00:10:12.317 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:12.317 "is_configured": true, 00:10:12.317 "data_offset": 2048, 00:10:12.317 "data_size": 63488 00:10:12.317 } 00:10:12.317 ] 00:10:12.317 }' 00:10:12.317 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.317 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.885 [2024-11-19 10:21:26.508911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.885 BaseBdev1 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.885 [ 00:10:12.885 { 00:10:12.885 "name": "BaseBdev1", 00:10:12.885 "aliases": [ 00:10:12.885 "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1" 00:10:12.885 ], 00:10:12.885 "product_name": "Malloc disk", 00:10:12.885 "block_size": 512, 00:10:12.885 "num_blocks": 65536, 00:10:12.885 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:12.885 "assigned_rate_limits": { 00:10:12.885 "rw_ios_per_sec": 0, 00:10:12.885 "rw_mbytes_per_sec": 0, 
00:10:12.885 "r_mbytes_per_sec": 0, 00:10:12.885 "w_mbytes_per_sec": 0 00:10:12.885 }, 00:10:12.885 "claimed": true, 00:10:12.885 "claim_type": "exclusive_write", 00:10:12.885 "zoned": false, 00:10:12.885 "supported_io_types": { 00:10:12.885 "read": true, 00:10:12.885 "write": true, 00:10:12.885 "unmap": true, 00:10:12.885 "flush": true, 00:10:12.885 "reset": true, 00:10:12.885 "nvme_admin": false, 00:10:12.885 "nvme_io": false, 00:10:12.885 "nvme_io_md": false, 00:10:12.885 "write_zeroes": true, 00:10:12.885 "zcopy": true, 00:10:12.885 "get_zone_info": false, 00:10:12.885 "zone_management": false, 00:10:12.885 "zone_append": false, 00:10:12.885 "compare": false, 00:10:12.885 "compare_and_write": false, 00:10:12.885 "abort": true, 00:10:12.885 "seek_hole": false, 00:10:12.885 "seek_data": false, 00:10:12.885 "copy": true, 00:10:12.885 "nvme_iov_md": false 00:10:12.885 }, 00:10:12.885 "memory_domains": [ 00:10:12.885 { 00:10:12.885 "dma_device_id": "system", 00:10:12.885 "dma_device_type": 1 00:10:12.885 }, 00:10:12.885 { 00:10:12.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.885 "dma_device_type": 2 00:10:12.885 } 00:10:12.885 ], 00:10:12.885 "driver_specific": {} 00:10:12.885 } 00:10:12.885 ] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.885 10:21:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.885 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.885 "name": "Existed_Raid", 00:10:12.885 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:12.885 "strip_size_kb": 64, 00:10:12.885 "state": "configuring", 00:10:12.885 "raid_level": "raid0", 00:10:12.885 "superblock": true, 00:10:12.885 "num_base_bdevs": 4, 00:10:12.885 "num_base_bdevs_discovered": 3, 00:10:12.885 "num_base_bdevs_operational": 4, 00:10:12.885 "base_bdevs_list": [ 00:10:12.885 { 00:10:12.885 "name": "BaseBdev1", 00:10:12.885 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:12.885 "is_configured": true, 00:10:12.885 "data_offset": 2048, 00:10:12.885 "data_size": 63488 00:10:12.885 }, 00:10:12.885 { 
00:10:12.885 "name": null, 00:10:12.885 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:12.885 "is_configured": false, 00:10:12.885 "data_offset": 0, 00:10:12.885 "data_size": 63488 00:10:12.885 }, 00:10:12.885 { 00:10:12.885 "name": "BaseBdev3", 00:10:12.885 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:12.885 "is_configured": true, 00:10:12.885 "data_offset": 2048, 00:10:12.885 "data_size": 63488 00:10:12.885 }, 00:10:12.885 { 00:10:12.885 "name": "BaseBdev4", 00:10:12.886 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:12.886 "is_configured": true, 00:10:12.886 "data_offset": 2048, 00:10:12.886 "data_size": 63488 00:10:12.886 } 00:10:12.886 ] 00:10:12.886 }' 00:10:12.886 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.886 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.454 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.454 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.454 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.454 10:21:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.454 10:21:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.454 [2024-11-19 10:21:27.008123] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.454 10:21:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.454 "name": "Existed_Raid", 00:10:13.454 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:13.454 "strip_size_kb": 64, 00:10:13.454 "state": "configuring", 00:10:13.454 "raid_level": "raid0", 00:10:13.454 "superblock": true, 00:10:13.454 "num_base_bdevs": 4, 00:10:13.454 "num_base_bdevs_discovered": 2, 00:10:13.454 "num_base_bdevs_operational": 4, 00:10:13.454 "base_bdevs_list": [ 00:10:13.454 { 00:10:13.454 "name": "BaseBdev1", 00:10:13.454 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:13.454 "is_configured": true, 00:10:13.454 "data_offset": 2048, 00:10:13.454 "data_size": 63488 00:10:13.454 }, 00:10:13.454 { 00:10:13.454 "name": null, 00:10:13.454 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:13.454 "is_configured": false, 00:10:13.454 "data_offset": 0, 00:10:13.454 "data_size": 63488 00:10:13.454 }, 00:10:13.454 { 00:10:13.454 "name": null, 00:10:13.454 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:13.454 "is_configured": false, 00:10:13.454 "data_offset": 0, 00:10:13.454 "data_size": 63488 00:10:13.454 }, 00:10:13.454 { 00:10:13.454 "name": "BaseBdev4", 00:10:13.454 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:13.454 "is_configured": true, 00:10:13.454 "data_offset": 2048, 00:10:13.454 "data_size": 63488 00:10:13.454 } 00:10:13.454 ] 00:10:13.454 }' 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.454 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.714 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.714 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.714 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.714 10:21:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.714 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 [2024-11-19 10:21:27.499274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.974 "name": "Existed_Raid", 00:10:13.974 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:13.974 "strip_size_kb": 64, 00:10:13.974 "state": "configuring", 00:10:13.974 "raid_level": "raid0", 00:10:13.974 "superblock": true, 00:10:13.974 "num_base_bdevs": 4, 00:10:13.974 "num_base_bdevs_discovered": 3, 00:10:13.974 "num_base_bdevs_operational": 4, 00:10:13.974 "base_bdevs_list": [ 00:10:13.974 { 00:10:13.974 "name": "BaseBdev1", 00:10:13.974 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:13.974 "is_configured": true, 00:10:13.974 "data_offset": 2048, 00:10:13.974 "data_size": 63488 00:10:13.974 }, 00:10:13.974 { 00:10:13.974 "name": null, 00:10:13.974 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:13.974 "is_configured": false, 00:10:13.974 "data_offset": 0, 00:10:13.974 "data_size": 63488 00:10:13.974 }, 00:10:13.974 { 00:10:13.974 "name": "BaseBdev3", 00:10:13.974 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:13.974 "is_configured": true, 00:10:13.974 "data_offset": 2048, 00:10:13.974 "data_size": 63488 00:10:13.974 }, 00:10:13.974 { 00:10:13.974 "name": "BaseBdev4", 00:10:13.974 "uuid": 
"015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:13.974 "is_configured": true, 00:10:13.974 "data_offset": 2048, 00:10:13.974 "data_size": 63488 00:10:13.974 } 00:10:13.974 ] 00:10:13.974 }' 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.974 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.233 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.233 10:21:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:14.233 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.233 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.233 10:21:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.233 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:14.233 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:14.233 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.233 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.493 [2024-11-19 10:21:28.014515] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.493 "name": "Existed_Raid", 00:10:14.493 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:14.493 "strip_size_kb": 64, 00:10:14.493 "state": "configuring", 00:10:14.493 "raid_level": "raid0", 00:10:14.493 "superblock": true, 00:10:14.493 "num_base_bdevs": 4, 00:10:14.493 "num_base_bdevs_discovered": 2, 00:10:14.493 "num_base_bdevs_operational": 4, 00:10:14.493 "base_bdevs_list": [ 00:10:14.493 { 00:10:14.493 "name": null, 00:10:14.493 
"uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:14.493 "is_configured": false, 00:10:14.493 "data_offset": 0, 00:10:14.493 "data_size": 63488 00:10:14.493 }, 00:10:14.493 { 00:10:14.493 "name": null, 00:10:14.493 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:14.493 "is_configured": false, 00:10:14.493 "data_offset": 0, 00:10:14.493 "data_size": 63488 00:10:14.493 }, 00:10:14.493 { 00:10:14.493 "name": "BaseBdev3", 00:10:14.493 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:14.493 "is_configured": true, 00:10:14.493 "data_offset": 2048, 00:10:14.493 "data_size": 63488 00:10:14.493 }, 00:10:14.493 { 00:10:14.493 "name": "BaseBdev4", 00:10:14.493 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:14.493 "is_configured": true, 00:10:14.493 "data_offset": 2048, 00:10:14.493 "data_size": 63488 00:10:14.493 } 00:10:14.493 ] 00:10:14.493 }' 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.493 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.063 [2024-11-19 10:21:28.597666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.063 10:21:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.063 "name": "Existed_Raid", 00:10:15.063 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:15.063 "strip_size_kb": 64, 00:10:15.063 "state": "configuring", 00:10:15.063 "raid_level": "raid0", 00:10:15.063 "superblock": true, 00:10:15.063 "num_base_bdevs": 4, 00:10:15.063 "num_base_bdevs_discovered": 3, 00:10:15.063 "num_base_bdevs_operational": 4, 00:10:15.063 "base_bdevs_list": [ 00:10:15.063 { 00:10:15.063 "name": null, 00:10:15.063 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:15.063 "is_configured": false, 00:10:15.063 "data_offset": 0, 00:10:15.063 "data_size": 63488 00:10:15.063 }, 00:10:15.063 { 00:10:15.063 "name": "BaseBdev2", 00:10:15.063 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:15.063 "is_configured": true, 00:10:15.063 "data_offset": 2048, 00:10:15.063 "data_size": 63488 00:10:15.063 }, 00:10:15.063 { 00:10:15.063 "name": "BaseBdev3", 00:10:15.063 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:15.063 "is_configured": true, 00:10:15.063 "data_offset": 2048, 00:10:15.063 "data_size": 63488 00:10:15.063 }, 00:10:15.063 { 00:10:15.063 "name": "BaseBdev4", 00:10:15.063 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:15.063 "is_configured": true, 00:10:15.063 "data_offset": 2048, 00:10:15.063 "data_size": 63488 00:10:15.063 } 00:10:15.063 ] 00:10:15.063 }' 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.063 10:21:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.323 10:21:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.323 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aefbb0b9-6099-4a20-9b6d-a7b2119a43b1 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.584 [2024-11-19 10:21:29.160587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.584 [2024-11-19 10:21:29.160893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:15.584 [2024-11-19 10:21:29.160930] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.584 [2024-11-19 10:21:29.161283] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:15.584 [2024-11-19 10:21:29.161459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:15.584 [2024-11-19 10:21:29.161505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:15.584 NewBaseBdev 00:10:15.584 [2024-11-19 10:21:29.161659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.584 10:21:29
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.584 [ 00:10:15.584 { 00:10:15.584 "name": "NewBaseBdev", 00:10:15.584 "aliases": [ 00:10:15.584 "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1" 00:10:15.584 ], 00:10:15.584 "product_name": "Malloc disk", 00:10:15.584 "block_size": 512, 00:10:15.584 "num_blocks": 65536, 00:10:15.584 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:15.584 "assigned_rate_limits": { 00:10:15.584 "rw_ios_per_sec": 0, 00:10:15.584 "rw_mbytes_per_sec": 0, 00:10:15.584 "r_mbytes_per_sec": 0, 00:10:15.584 "w_mbytes_per_sec": 0 00:10:15.584 }, 00:10:15.584 "claimed": true, 00:10:15.584 "claim_type": "exclusive_write", 00:10:15.584 "zoned": false, 00:10:15.584 "supported_io_types": { 00:10:15.584 "read": true, 00:10:15.584 "write": true, 00:10:15.584 "unmap": true, 00:10:15.584 "flush": true, 00:10:15.584 "reset": true, 00:10:15.584 "nvme_admin": false, 00:10:15.584 "nvme_io": false, 00:10:15.584 "nvme_io_md": false, 00:10:15.584 "write_zeroes": true, 00:10:15.584 "zcopy": true, 00:10:15.584 "get_zone_info": false, 00:10:15.584 "zone_management": false, 00:10:15.584 "zone_append": false, 00:10:15.584 "compare": false, 00:10:15.584 "compare_and_write": false, 00:10:15.584 "abort": true, 00:10:15.584 "seek_hole": false, 00:10:15.584 "seek_data": false, 00:10:15.584 "copy": true, 00:10:15.584 "nvme_iov_md": false 00:10:15.584 }, 00:10:15.584 "memory_domains": [ 00:10:15.584 { 00:10:15.584 "dma_device_id": "system", 00:10:15.584 "dma_device_type": 1 00:10:15.584 }, 00:10:15.584 { 00:10:15.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.584 "dma_device_type": 2 00:10:15.584 } 00:10:15.584 ], 00:10:15.584 "driver_specific": {} 00:10:15.584 } 00:10:15.584 ] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.584 10:21:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.584 "name": "Existed_Raid", 00:10:15.584 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:15.584 "strip_size_kb": 64, 00:10:15.584 
"state": "online", 00:10:15.584 "raid_level": "raid0", 00:10:15.584 "superblock": true, 00:10:15.584 "num_base_bdevs": 4, 00:10:15.584 "num_base_bdevs_discovered": 4, 00:10:15.584 "num_base_bdevs_operational": 4, 00:10:15.584 "base_bdevs_list": [ 00:10:15.584 { 00:10:15.584 "name": "NewBaseBdev", 00:10:15.584 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:15.584 "is_configured": true, 00:10:15.584 "data_offset": 2048, 00:10:15.584 "data_size": 63488 00:10:15.584 }, 00:10:15.584 { 00:10:15.584 "name": "BaseBdev2", 00:10:15.584 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:15.584 "is_configured": true, 00:10:15.584 "data_offset": 2048, 00:10:15.584 "data_size": 63488 00:10:15.584 }, 00:10:15.584 { 00:10:15.584 "name": "BaseBdev3", 00:10:15.584 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:15.584 "is_configured": true, 00:10:15.584 "data_offset": 2048, 00:10:15.584 "data_size": 63488 00:10:15.584 }, 00:10:15.584 { 00:10:15.584 "name": "BaseBdev4", 00:10:15.584 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:15.584 "is_configured": true, 00:10:15.584 "data_offset": 2048, 00:10:15.584 "data_size": 63488 00:10:15.584 } 00:10:15.584 ] 00:10:15.584 }' 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.584 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.844 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:16.105 
10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.105 [2024-11-19 10:21:29.636190] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:16.105 "name": "Existed_Raid", 00:10:16.105 "aliases": [ 00:10:16.105 "afa4163e-38f4-44bc-8060-d96eac29d627" 00:10:16.105 ], 00:10:16.105 "product_name": "Raid Volume", 00:10:16.105 "block_size": 512, 00:10:16.105 "num_blocks": 253952, 00:10:16.105 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:16.105 "assigned_rate_limits": { 00:10:16.105 "rw_ios_per_sec": 0, 00:10:16.105 "rw_mbytes_per_sec": 0, 00:10:16.105 "r_mbytes_per_sec": 0, 00:10:16.105 "w_mbytes_per_sec": 0 00:10:16.105 }, 00:10:16.105 "claimed": false, 00:10:16.105 "zoned": false, 00:10:16.105 "supported_io_types": { 00:10:16.105 "read": true, 00:10:16.105 "write": true, 00:10:16.105 "unmap": true, 00:10:16.105 "flush": true, 00:10:16.105 "reset": true, 00:10:16.105 "nvme_admin": false, 00:10:16.105 "nvme_io": false, 00:10:16.105 "nvme_io_md": false, 00:10:16.105 "write_zeroes": true, 00:10:16.105 "zcopy": false, 00:10:16.105 "get_zone_info": false, 00:10:16.105 "zone_management": false, 00:10:16.105 "zone_append": false, 00:10:16.105 "compare": false, 00:10:16.105 "compare_and_write": false, 00:10:16.105 "abort": 
false, 00:10:16.105 "seek_hole": false, 00:10:16.105 "seek_data": false, 00:10:16.105 "copy": false, 00:10:16.105 "nvme_iov_md": false 00:10:16.105 }, 00:10:16.105 "memory_domains": [ 00:10:16.105 { 00:10:16.105 "dma_device_id": "system", 00:10:16.105 "dma_device_type": 1 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.105 "dma_device_type": 2 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "system", 00:10:16.105 "dma_device_type": 1 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.105 "dma_device_type": 2 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "system", 00:10:16.105 "dma_device_type": 1 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.105 "dma_device_type": 2 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "system", 00:10:16.105 "dma_device_type": 1 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.105 "dma_device_type": 2 00:10:16.105 } 00:10:16.105 ], 00:10:16.105 "driver_specific": { 00:10:16.105 "raid": { 00:10:16.105 "uuid": "afa4163e-38f4-44bc-8060-d96eac29d627", 00:10:16.105 "strip_size_kb": 64, 00:10:16.105 "state": "online", 00:10:16.105 "raid_level": "raid0", 00:10:16.105 "superblock": true, 00:10:16.105 "num_base_bdevs": 4, 00:10:16.105 "num_base_bdevs_discovered": 4, 00:10:16.105 "num_base_bdevs_operational": 4, 00:10:16.105 "base_bdevs_list": [ 00:10:16.105 { 00:10:16.105 "name": "NewBaseBdev", 00:10:16.105 "uuid": "aefbb0b9-6099-4a20-9b6d-a7b2119a43b1", 00:10:16.105 "is_configured": true, 00:10:16.105 "data_offset": 2048, 00:10:16.105 "data_size": 63488 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "name": "BaseBdev2", 00:10:16.105 "uuid": "f74a7438-068e-44f9-a12a-9d7018a51383", 00:10:16.105 "is_configured": true, 00:10:16.105 "data_offset": 2048, 00:10:16.105 "data_size": 63488 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 
"name": "BaseBdev3", 00:10:16.105 "uuid": "ff9ace36-59ce-44cb-893b-284c022b7a67", 00:10:16.105 "is_configured": true, 00:10:16.105 "data_offset": 2048, 00:10:16.105 "data_size": 63488 00:10:16.105 }, 00:10:16.105 { 00:10:16.105 "name": "BaseBdev4", 00:10:16.105 "uuid": "015b45d9-b4f8-43a9-959c-c1f36a2efca7", 00:10:16.105 "is_configured": true, 00:10:16.105 "data_offset": 2048, 00:10:16.105 "data_size": 63488 00:10:16.105 } 00:10:16.105 ] 00:10:16.105 } 00:10:16.105 } 00:10:16.105 }' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:16.105 BaseBdev2 00:10:16.105 BaseBdev3 00:10:16.105 BaseBdev4' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.105 10:21:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:16.105 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.106 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.106 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.365 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:16.366 [2024-11-19 10:21:29.959266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.366 [2024-11-19 10:21:29.959294] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:16.366 [2024-11-19 10:21:29.959365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:16.366 [2024-11-19 10:21:29.959430] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:16.366 [2024-11-19 10:21:29.959441] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69844 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69844 ']' 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69844 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.366 10:21:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69844 00:10:16.366 killing process with pid 69844 00:10:16.366 10:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.366 10:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.366 10:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69844' 00:10:16.366 10:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69844 00:10:16.366 [2024-11-19 10:21:30.004811] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:16.366 10:21:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69844 00:10:16.625 [2024-11-19 10:21:30.392411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.004 10:21:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:18.004 00:10:18.004 real 0m11.293s 00:10:18.004 user 0m18.050s 00:10:18.004 sys 0m1.971s 00:10:18.004 ************************************ 00:10:18.004 END TEST raid_state_function_test_sb 00:10:18.004 
************************************ 00:10:18.004 10:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.004 10:21:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:18.004 10:21:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:18.004 10:21:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:18.004 10:21:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.004 10:21:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.004 ************************************ 00:10:18.004 START TEST raid_superblock_test 00:10:18.004 ************************************ 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70514 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70514 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70514 ']' 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.004 10:21:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.004 [2024-11-19 10:21:31.582322] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:18.004 [2024-11-19 10:21:31.582530] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70514 ] 00:10:18.004 [2024-11-19 10:21:31.753690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.264 [2024-11-19 10:21:31.863533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.523 [2024-11-19 10:21:32.057010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.523 [2024-11-19 10:21:32.057073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:18.783 
10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.783 malloc1 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.783 [2024-11-19 10:21:32.462725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.783 [2024-11-19 10:21:32.462847] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.783 [2024-11-19 10:21:32.462889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:18.783 [2024-11-19 10:21:32.462920] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.783 [2024-11-19 10:21:32.465102] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.783 [2024-11-19 10:21:32.465170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.783 pt1 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.783 malloc2 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.783 [2024-11-19 10:21:32.522354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.783 [2024-11-19 10:21:32.522407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.783 [2024-11-19 10:21:32.522426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:18.783 [2024-11-19 10:21:32.522435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.783 [2024-11-19 10:21:32.524476] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.783 [2024-11-19 10:21:32.524513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.783 
pt2 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.783 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.043 malloc3 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.043 [2024-11-19 10:21:32.593831] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:19.043 [2024-11-19 10:21:32.593950] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.043 [2024-11-19 10:21:32.593987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:19.043 [2024-11-19 10:21:32.594028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.043 [2024-11-19 10:21:32.596021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.043 [2024-11-19 10:21:32.596090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:19.043 pt3 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.043 malloc4 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.043 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.043 [2024-11-19 10:21:32.650723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:19.043 [2024-11-19 10:21:32.650826] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.043 [2024-11-19 10:21:32.650862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:19.043 [2024-11-19 10:21:32.650889] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.043 [2024-11-19 10:21:32.652903] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.043 [2024-11-19 10:21:32.652973] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:19.043 pt4 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.044 [2024-11-19 10:21:32.662737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:19.044 [2024-11-19 
10:21:32.664523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:19.044 [2024-11-19 10:21:32.664637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:19.044 [2024-11-19 10:21:32.664699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:19.044 [2024-11-19 10:21:32.664883] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:19.044 [2024-11-19 10:21:32.664895] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.044 [2024-11-19 10:21:32.665149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:19.044 [2024-11-19 10:21:32.665305] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:19.044 [2024-11-19 10:21:32.665322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:19.044 [2024-11-19 10:21:32.665456] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.044 "name": "raid_bdev1", 00:10:19.044 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:19.044 "strip_size_kb": 64, 00:10:19.044 "state": "online", 00:10:19.044 "raid_level": "raid0", 00:10:19.044 "superblock": true, 00:10:19.044 "num_base_bdevs": 4, 00:10:19.044 "num_base_bdevs_discovered": 4, 00:10:19.044 "num_base_bdevs_operational": 4, 00:10:19.044 "base_bdevs_list": [ 00:10:19.044 { 00:10:19.044 "name": "pt1", 00:10:19.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.044 "is_configured": true, 00:10:19.044 "data_offset": 2048, 00:10:19.044 "data_size": 63488 00:10:19.044 }, 00:10:19.044 { 00:10:19.044 "name": "pt2", 00:10:19.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.044 "is_configured": true, 00:10:19.044 "data_offset": 2048, 00:10:19.044 "data_size": 63488 00:10:19.044 }, 00:10:19.044 { 00:10:19.044 "name": "pt3", 00:10:19.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.044 "is_configured": true, 00:10:19.044 "data_offset": 2048, 00:10:19.044 
"data_size": 63488 00:10:19.044 }, 00:10:19.044 { 00:10:19.044 "name": "pt4", 00:10:19.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.044 "is_configured": true, 00:10:19.044 "data_offset": 2048, 00:10:19.044 "data_size": 63488 00:10:19.044 } 00:10:19.044 ] 00:10:19.044 }' 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.044 10:21:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.613 [2024-11-19 10:21:33.098288] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.613 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.613 "name": "raid_bdev1", 00:10:19.613 "aliases": [ 00:10:19.613 "115a9f52-3619-45f2-8dc9-3227b075cf36" 
00:10:19.613 ], 00:10:19.613 "product_name": "Raid Volume", 00:10:19.613 "block_size": 512, 00:10:19.613 "num_blocks": 253952, 00:10:19.613 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:19.613 "assigned_rate_limits": { 00:10:19.613 "rw_ios_per_sec": 0, 00:10:19.613 "rw_mbytes_per_sec": 0, 00:10:19.613 "r_mbytes_per_sec": 0, 00:10:19.613 "w_mbytes_per_sec": 0 00:10:19.613 }, 00:10:19.613 "claimed": false, 00:10:19.613 "zoned": false, 00:10:19.613 "supported_io_types": { 00:10:19.613 "read": true, 00:10:19.613 "write": true, 00:10:19.613 "unmap": true, 00:10:19.613 "flush": true, 00:10:19.613 "reset": true, 00:10:19.613 "nvme_admin": false, 00:10:19.613 "nvme_io": false, 00:10:19.613 "nvme_io_md": false, 00:10:19.613 "write_zeroes": true, 00:10:19.613 "zcopy": false, 00:10:19.613 "get_zone_info": false, 00:10:19.613 "zone_management": false, 00:10:19.613 "zone_append": false, 00:10:19.613 "compare": false, 00:10:19.613 "compare_and_write": false, 00:10:19.613 "abort": false, 00:10:19.613 "seek_hole": false, 00:10:19.613 "seek_data": false, 00:10:19.613 "copy": false, 00:10:19.613 "nvme_iov_md": false 00:10:19.613 }, 00:10:19.613 "memory_domains": [ 00:10:19.613 { 00:10:19.613 "dma_device_id": "system", 00:10:19.613 "dma_device_type": 1 00:10:19.613 }, 00:10:19.613 { 00:10:19.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.613 "dma_device_type": 2 00:10:19.613 }, 00:10:19.613 { 00:10:19.614 "dma_device_id": "system", 00:10:19.614 "dma_device_type": 1 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.614 "dma_device_type": 2 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "dma_device_id": "system", 00:10:19.614 "dma_device_type": 1 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.614 "dma_device_type": 2 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "dma_device_id": "system", 00:10:19.614 "dma_device_type": 1 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:19.614 "dma_device_type": 2 00:10:19.614 } 00:10:19.614 ], 00:10:19.614 "driver_specific": { 00:10:19.614 "raid": { 00:10:19.614 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:19.614 "strip_size_kb": 64, 00:10:19.614 "state": "online", 00:10:19.614 "raid_level": "raid0", 00:10:19.614 "superblock": true, 00:10:19.614 "num_base_bdevs": 4, 00:10:19.614 "num_base_bdevs_discovered": 4, 00:10:19.614 "num_base_bdevs_operational": 4, 00:10:19.614 "base_bdevs_list": [ 00:10:19.614 { 00:10:19.614 "name": "pt1", 00:10:19.614 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.614 "is_configured": true, 00:10:19.614 "data_offset": 2048, 00:10:19.614 "data_size": 63488 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "name": "pt2", 00:10:19.614 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.614 "is_configured": true, 00:10:19.614 "data_offset": 2048, 00:10:19.614 "data_size": 63488 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "name": "pt3", 00:10:19.614 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.614 "is_configured": true, 00:10:19.614 "data_offset": 2048, 00:10:19.614 "data_size": 63488 00:10:19.614 }, 00:10:19.614 { 00:10:19.614 "name": "pt4", 00:10:19.614 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:19.614 "is_configured": true, 00:10:19.614 "data_offset": 2048, 00:10:19.614 "data_size": 63488 00:10:19.614 } 00:10:19.614 ] 00:10:19.614 } 00:10:19.614 } 00:10:19.614 }' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:19.614 pt2 00:10:19.614 pt3 00:10:19.614 pt4' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.614 10:21:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.614 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 [2024-11-19 10:21:33.433695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=115a9f52-3619-45f2-8dc9-3227b075cf36 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 115a9f52-3619-45f2-8dc9-3227b075cf36 ']' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 [2024-11-19 10:21:33.465323] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.874 [2024-11-19 10:21:33.465387] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.874 [2024-11-19 10:21:33.465469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.874 [2024-11-19 10:21:33.465549] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.874 [2024-11-19 10:21:33.465563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.874 10:21:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.874 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.874 [2024-11-19 10:21:33.625085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:19.874 [2024-11-19 10:21:33.626889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:19.874 [2024-11-19 10:21:33.626930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:19.874 [2024-11-19 10:21:33.626961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:19.874 [2024-11-19 10:21:33.627072] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:19.874 [2024-11-19 10:21:33.627156] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:19.874 [2024-11-19 10:21:33.627203] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:19.875 [2024-11-19 10:21:33.627264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:19.875 [2024-11-19 10:21:33.627322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.875 [2024-11-19 10:21:33.627366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:19.875 request: 00:10:19.875 { 00:10:19.875 "name": "raid_bdev1", 00:10:19.875 "raid_level": "raid0", 00:10:19.875 "base_bdevs": [ 00:10:19.875 "malloc1", 00:10:19.875 "malloc2", 00:10:19.875 "malloc3", 00:10:19.875 "malloc4" 00:10:19.875 ], 00:10:19.875 "strip_size_kb": 64, 00:10:19.875 "superblock": false, 00:10:19.875 "method": "bdev_raid_create", 00:10:19.875 "req_id": 1 00:10:19.875 } 00:10:19.875 Got JSON-RPC error response 00:10:19.875 response: 00:10:19.875 { 00:10:19.875 "code": -17, 00:10:19.875 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:19.875 } 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.135 [2024-11-19 10:21:33.684958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:20.135 [2024-11-19 10:21:33.685040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.135 [2024-11-19 10:21:33.685058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:20.135 [2024-11-19 10:21:33.685068] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.135 [2024-11-19 10:21:33.687227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.135 [2024-11-19 10:21:33.687266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:20.135 [2024-11-19 10:21:33.687342] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:20.135 [2024-11-19 10:21:33.687424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:20.135 pt1 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.135 "name": "raid_bdev1", 00:10:20.135 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:20.135 "strip_size_kb": 64, 00:10:20.135 "state": "configuring", 00:10:20.135 "raid_level": "raid0", 00:10:20.135 "superblock": true, 00:10:20.135 "num_base_bdevs": 4, 00:10:20.135 "num_base_bdevs_discovered": 1, 00:10:20.135 "num_base_bdevs_operational": 4, 00:10:20.135 "base_bdevs_list": [ 00:10:20.135 { 00:10:20.135 "name": "pt1", 00:10:20.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.135 "is_configured": true, 00:10:20.135 "data_offset": 2048, 00:10:20.135 "data_size": 63488 00:10:20.135 }, 00:10:20.135 { 00:10:20.135 "name": null, 00:10:20.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.135 "is_configured": false, 00:10:20.135 "data_offset": 2048, 00:10:20.135 "data_size": 63488 00:10:20.135 }, 00:10:20.135 { 00:10:20.135 "name": null, 00:10:20.135 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.135 "is_configured": false, 00:10:20.135 "data_offset": 2048, 00:10:20.135 "data_size": 63488 00:10:20.135 }, 00:10:20.135 { 00:10:20.135 "name": null, 00:10:20.135 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.135 "is_configured": false, 00:10:20.135 "data_offset": 2048, 00:10:20.135 "data_size": 63488 00:10:20.135 } 00:10:20.135 ] 00:10:20.135 }' 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.135 10:21:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 [2024-11-19 10:21:34.076313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.395 [2024-11-19 10:21:34.076453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.395 [2024-11-19 10:21:34.076489] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:20.395 [2024-11-19 10:21:34.076542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.395 [2024-11-19 10:21:34.077025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.395 [2024-11-19 10:21:34.077086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.395 [2024-11-19 10:21:34.077201] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.395 [2024-11-19 10:21:34.077254] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.395 pt2 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 [2024-11-19 10:21:34.084302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.395 10:21:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.395 "name": "raid_bdev1", 00:10:20.395 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:20.395 "strip_size_kb": 64, 00:10:20.395 "state": "configuring", 00:10:20.395 "raid_level": "raid0", 00:10:20.395 "superblock": true, 00:10:20.395 "num_base_bdevs": 4, 00:10:20.395 "num_base_bdevs_discovered": 1, 00:10:20.395 "num_base_bdevs_operational": 4, 00:10:20.395 "base_bdevs_list": [ 00:10:20.395 { 00:10:20.395 "name": "pt1", 00:10:20.395 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.395 "is_configured": true, 00:10:20.395 "data_offset": 2048, 00:10:20.395 "data_size": 63488 00:10:20.395 }, 00:10:20.395 { 00:10:20.395 "name": null, 00:10:20.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.395 "is_configured": false, 00:10:20.395 "data_offset": 0, 00:10:20.395 "data_size": 63488 00:10:20.395 }, 00:10:20.395 { 00:10:20.395 "name": null, 00:10:20.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.395 "is_configured": false, 00:10:20.395 "data_offset": 2048, 00:10:20.395 "data_size": 63488 00:10:20.395 }, 00:10:20.395 { 00:10:20.395 "name": null, 00:10:20.395 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.395 "is_configured": false, 00:10:20.395 "data_offset": 2048, 00:10:20.395 "data_size": 63488 00:10:20.395 } 00:10:20.395 ] 00:10:20.395 }' 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.395 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.964 [2024-11-19 10:21:34.523546] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.964 [2024-11-19 10:21:34.523610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.964 [2024-11-19 10:21:34.523630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:20.964 [2024-11-19 10:21:34.523639] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.964 [2024-11-19 10:21:34.524113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.964 [2024-11-19 10:21:34.524136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.964 [2024-11-19 10:21:34.524224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.964 [2024-11-19 10:21:34.524245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.964 pt2 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.964 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 [2024-11-19 10:21:34.531499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.965 [2024-11-19 10:21:34.531550] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.965 [2024-11-19 10:21:34.531573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:20.965 [2024-11-19 10:21:34.531583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.965 [2024-11-19 10:21:34.531934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.965 [2024-11-19 10:21:34.531948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.965 [2024-11-19 10:21:34.532019] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.965 [2024-11-19 10:21:34.532035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.965 pt3 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 [2024-11-19 10:21:34.539462] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:20.965 [2024-11-19 10:21:34.539522] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.965 [2024-11-19 10:21:34.539555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:20.965 [2024-11-19 10:21:34.539563] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.965 [2024-11-19 10:21:34.539906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.965 [2024-11-19 10:21:34.539920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:20.965 [2024-11-19 10:21:34.539976] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:20.965 [2024-11-19 10:21:34.539993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:20.965 [2024-11-19 10:21:34.540147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.965 [2024-11-19 10:21:34.540160] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:20.965 [2024-11-19 10:21:34.540388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:20.965 [2024-11-19 10:21:34.540528] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.965 [2024-11-19 10:21:34.540541] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:20.965 [2024-11-19 10:21:34.540670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.965 pt4 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.965 
10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.965 "name": "raid_bdev1", 00:10:20.965 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:20.965 "strip_size_kb": 64, 00:10:20.965 "state": "online", 00:10:20.965 "raid_level": "raid0", 00:10:20.965 "superblock": true, 00:10:20.965 
"num_base_bdevs": 4, 00:10:20.965 "num_base_bdevs_discovered": 4, 00:10:20.965 "num_base_bdevs_operational": 4, 00:10:20.965 "base_bdevs_list": [ 00:10:20.965 { 00:10:20.965 "name": "pt1", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 }, 00:10:20.965 { 00:10:20.965 "name": "pt2", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 }, 00:10:20.965 { 00:10:20.965 "name": "pt3", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 }, 00:10:20.965 { 00:10:20.965 "name": "pt4", 00:10:20.965 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:20.965 "is_configured": true, 00:10:20.965 "data_offset": 2048, 00:10:20.965 "data_size": 63488 00:10:20.965 } 00:10:20.965 ] 00:10:20.965 }' 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.965 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
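`verify_raid_bdev_state` above works by selecting the `raid_bdev1` entry from `bdev_raid_get_bdevs all` output and comparing its fields against the expected values. A standalone sketch of that selection, run against an abbreviated copy of the JSON from the log (no SPDK target needed):

```shell
# Abbreviated bdev_raid_get_bdevs output, copied from the log above.
all_bdevs='[{"name": "raid_bdev1", "state": "online", "raid_level": "raid0", "num_base_bdevs_discovered": 4}]'
# Same selector the script uses at bdev_raid.sh@113.
info=$(echo "$all_bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')
discovered=$(echo "$info" | jq -r '.num_base_bdevs_discovered')
echo "state=$state discovered=$discovered"
```

Comparing the two dumps in the log shows the transition this checks: after only `pt1` is claimed the state is `configuring` with `num_base_bdevs_discovered: 1`; once all four passthru bdevs are claimed it flips to `online` with 4 discovered.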
jq '.[]' 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.225 [2024-11-19 10:21:34.935161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.225 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.225 "name": "raid_bdev1", 00:10:21.225 "aliases": [ 00:10:21.225 "115a9f52-3619-45f2-8dc9-3227b075cf36" 00:10:21.225 ], 00:10:21.225 "product_name": "Raid Volume", 00:10:21.225 "block_size": 512, 00:10:21.225 "num_blocks": 253952, 00:10:21.225 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:21.225 "assigned_rate_limits": { 00:10:21.225 "rw_ios_per_sec": 0, 00:10:21.225 "rw_mbytes_per_sec": 0, 00:10:21.225 "r_mbytes_per_sec": 0, 00:10:21.225 "w_mbytes_per_sec": 0 00:10:21.225 }, 00:10:21.225 "claimed": false, 00:10:21.225 "zoned": false, 00:10:21.225 "supported_io_types": { 00:10:21.225 "read": true, 00:10:21.225 "write": true, 00:10:21.225 "unmap": true, 00:10:21.225 "flush": true, 00:10:21.225 "reset": true, 00:10:21.225 "nvme_admin": false, 00:10:21.225 "nvme_io": false, 00:10:21.225 "nvme_io_md": false, 00:10:21.225 "write_zeroes": true, 00:10:21.225 "zcopy": false, 00:10:21.225 "get_zone_info": false, 00:10:21.225 "zone_management": false, 00:10:21.225 "zone_append": false, 00:10:21.225 "compare": false, 00:10:21.225 "compare_and_write": false, 00:10:21.225 "abort": false, 00:10:21.225 "seek_hole": false, 00:10:21.225 "seek_data": false, 00:10:21.225 "copy": false, 00:10:21.225 "nvme_iov_md": false 00:10:21.225 }, 00:10:21.225 "memory_domains": [ 00:10:21.225 { 00:10:21.225 "dma_device_id": "system", 
00:10:21.225 "dma_device_type": 1 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.225 "dma_device_type": 2 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "system", 00:10:21.225 "dma_device_type": 1 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.225 "dma_device_type": 2 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "system", 00:10:21.225 "dma_device_type": 1 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.225 "dma_device_type": 2 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "system", 00:10:21.225 "dma_device_type": 1 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.225 "dma_device_type": 2 00:10:21.225 } 00:10:21.225 ], 00:10:21.225 "driver_specific": { 00:10:21.225 "raid": { 00:10:21.225 "uuid": "115a9f52-3619-45f2-8dc9-3227b075cf36", 00:10:21.225 "strip_size_kb": 64, 00:10:21.225 "state": "online", 00:10:21.225 "raid_level": "raid0", 00:10:21.225 "superblock": true, 00:10:21.225 "num_base_bdevs": 4, 00:10:21.225 "num_base_bdevs_discovered": 4, 00:10:21.225 "num_base_bdevs_operational": 4, 00:10:21.225 "base_bdevs_list": [ 00:10:21.225 { 00:10:21.225 "name": "pt1", 00:10:21.225 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.225 "is_configured": true, 00:10:21.225 "data_offset": 2048, 00:10:21.225 "data_size": 63488 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "name": "pt2", 00:10:21.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.225 "is_configured": true, 00:10:21.225 "data_offset": 2048, 00:10:21.225 "data_size": 63488 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "name": "pt3", 00:10:21.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.225 "is_configured": true, 00:10:21.225 "data_offset": 2048, 00:10:21.225 "data_size": 63488 00:10:21.225 }, 00:10:21.225 { 00:10:21.225 "name": "pt4", 00:10:21.225 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:21.226 "is_configured": true, 00:10:21.226 "data_offset": 2048, 00:10:21.226 "data_size": 63488 00:10:21.226 } 00:10:21.226 ] 00:10:21.226 } 00:10:21.226 } 00:10:21.226 }' 00:10:21.226 10:21:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.485 pt2 00:10:21.485 pt3 00:10:21.485 pt4' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- 
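The `jq` filter at `bdev_raid.sh@188` above derives `base_bdev_names` from the volume's `driver_specific.raid` section, keeping only configured base bdevs. A minimal sketch against a trimmed copy of the volume info (only two entries kept for brevity; the log's real list has four, all configured):

```shell
# Trimmed raid volume info modeled on the log above.
raid_info='{"driver_specific": {"raid": {"base_bdevs_list": [
  {"name": "pt1", "is_configured": true},
  {"name": "pt2", "is_configured": false}]}}}'
# Same filter as bdev_raid.sh@188: emit names of configured base bdevs only.
echo "$raid_info" | jq -r \
  '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
```

With the trimmed input this prints just `pt1`; on the log's data it yields `pt1 pt2 pt3 pt4`, which the script then iterates to compare each base bdev's `[.block_size, .md_size, .md_interleave, .dif_type]` tuple against the raid volume's.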
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.485 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.485 [2024-11-19 10:21:35.254501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 115a9f52-3619-45f2-8dc9-3227b075cf36 '!=' 115a9f52-3619-45f2-8dc9-3227b075cf36 ']' 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70514 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70514 ']' 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70514 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:21.745 10:21:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70514 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70514' 00:10:21.745 killing process with pid 70514 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70514 00:10:21.745 [2024-11-19 10:21:35.328983] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.745 [2024-11-19 10:21:35.329075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.745 [2024-11-19 10:21:35.329143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.745 [2024-11-19 10:21:35.329152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:21.745 10:21:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70514 00:10:22.004 [2024-11-19 10:21:35.710006] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.382 10:21:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.382 00:10:23.382 real 0m5.274s 00:10:23.382 user 0m7.539s 00:10:23.382 sys 0m0.910s 00:10:23.382 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.382 ************************************ 00:10:23.382 END TEST raid_superblock_test 00:10:23.382 ************************************ 00:10:23.382 10:21:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.382 
10:21:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:23.382 10:21:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.382 10:21:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.382 10:21:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.382 ************************************ 00:10:23.382 START TEST raid_read_error_test 00:10:23.382 ************************************ 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:23.382 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.KnlIPvJliv 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70768 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.383 10:21:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70768 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70768 ']' 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.383 10:21:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.383 [2024-11-19 10:21:36.940522] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:23.383 [2024-11-19 10:21:36.940631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70768 ] 00:10:23.383 [2024-11-19 10:21:37.113495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.641 [2024-11-19 10:21:37.225801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.900 [2024-11-19 10:21:37.428826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.900 [2024-11-19 10:21:37.428887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 BaseBdev1_malloc 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 true 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 [2024-11-19 10:21:37.825697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.159 [2024-11-19 10:21:37.825748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.159 [2024-11-19 10:21:37.825766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.159 [2024-11-19 10:21:37.825776] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.159 [2024-11-19 10:21:37.827821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.159 [2024-11-19 10:21:37.827861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.159 BaseBdev1 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 BaseBdev2_malloc 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 true 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.159 [2024-11-19 10:21:37.888337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.159 [2024-11-19 10:21:37.888401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.159 [2024-11-19 10:21:37.888417] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.159 [2024-11-19 10:21:37.888428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.159 [2024-11-19 10:21:37.890459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.159 [2024-11-19 10:21:37.890498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.159 BaseBdev2 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.159 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.160 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.160 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 BaseBdev3_malloc 00:10:24.419 10:21:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 true 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 [2024-11-19 10:21:37.973684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.419 [2024-11-19 10:21:37.973748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.419 [2024-11-19 10:21:37.973765] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.419 [2024-11-19 10:21:37.973778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.419 [2024-11-19 10:21:37.975872] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.419 [2024-11-19 10:21:37.975908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.419 BaseBdev3 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 BaseBdev4_malloc 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 true 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 [2024-11-19 10:21:38.040292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:24.419 [2024-11-19 10:21:38.040340] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.419 [2024-11-19 10:21:38.040355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:24.419 [2024-11-19 10:21:38.040365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.419 [2024-11-19 10:21:38.042360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.419 [2024-11-19 10:21:38.042398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:24.419 BaseBdev4 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 [2024-11-19 10:21:38.052329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.419 [2024-11-19 10:21:38.054139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.419 [2024-11-19 10:21:38.054214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.419 [2024-11-19 10:21:38.054276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:24.419 [2024-11-19 10:21:38.054490] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:24.419 [2024-11-19 10:21:38.054529] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:24.419 [2024-11-19 10:21:38.054765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:24.419 [2024-11-19 10:21:38.054931] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:24.419 [2024-11-19 10:21:38.054949] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:24.419 [2024-11-19 10:21:38.055108] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:24.419 10:21:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.419 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.419 "name": "raid_bdev1", 00:10:24.419 "uuid": "4ee15d0c-5727-47d8-b912-e9a83cf03cf3", 00:10:24.419 "strip_size_kb": 64, 00:10:24.419 "state": "online", 00:10:24.419 "raid_level": "raid0", 00:10:24.419 "superblock": true, 00:10:24.419 "num_base_bdevs": 4, 00:10:24.419 "num_base_bdevs_discovered": 4, 00:10:24.419 "num_base_bdevs_operational": 4, 00:10:24.419 "base_bdevs_list": [ 00:10:24.419 
{ 00:10:24.419 "name": "BaseBdev1", 00:10:24.419 "uuid": "94dc6e8c-9931-57b5-bc69-932ca32d5ceb", 00:10:24.419 "is_configured": true, 00:10:24.420 "data_offset": 2048, 00:10:24.420 "data_size": 63488 00:10:24.420 }, 00:10:24.420 { 00:10:24.420 "name": "BaseBdev2", 00:10:24.420 "uuid": "858634e7-0b23-5de8-829e-6f98c56729a9", 00:10:24.420 "is_configured": true, 00:10:24.420 "data_offset": 2048, 00:10:24.420 "data_size": 63488 00:10:24.420 }, 00:10:24.420 { 00:10:24.420 "name": "BaseBdev3", 00:10:24.420 "uuid": "fc5860c1-3017-523e-8c07-476f63967817", 00:10:24.420 "is_configured": true, 00:10:24.420 "data_offset": 2048, 00:10:24.420 "data_size": 63488 00:10:24.420 }, 00:10:24.420 { 00:10:24.420 "name": "BaseBdev4", 00:10:24.420 "uuid": "25085446-99ae-5969-aab7-0fdf7c522854", 00:10:24.420 "is_configured": true, 00:10:24.420 "data_offset": 2048, 00:10:24.420 "data_size": 63488 00:10:24.420 } 00:10:24.420 ] 00:10:24.420 }' 00:10:24.420 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.420 10:21:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.988 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:24.988 10:21:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.988 [2024-11-19 10:21:38.576614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.958 10:21:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.958 10:21:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.958 "name": "raid_bdev1", 00:10:25.958 "uuid": "4ee15d0c-5727-47d8-b912-e9a83cf03cf3", 00:10:25.958 "strip_size_kb": 64, 00:10:25.958 "state": "online", 00:10:25.958 "raid_level": "raid0", 00:10:25.958 "superblock": true, 00:10:25.958 "num_base_bdevs": 4, 00:10:25.958 "num_base_bdevs_discovered": 4, 00:10:25.958 "num_base_bdevs_operational": 4, 00:10:25.958 "base_bdevs_list": [ 00:10:25.958 { 00:10:25.958 "name": "BaseBdev1", 00:10:25.958 "uuid": "94dc6e8c-9931-57b5-bc69-932ca32d5ceb", 00:10:25.958 "is_configured": true, 00:10:25.958 "data_offset": 2048, 00:10:25.958 "data_size": 63488 00:10:25.958 }, 00:10:25.958 { 00:10:25.958 "name": "BaseBdev2", 00:10:25.958 "uuid": "858634e7-0b23-5de8-829e-6f98c56729a9", 00:10:25.958 "is_configured": true, 00:10:25.958 "data_offset": 2048, 00:10:25.958 "data_size": 63488 00:10:25.958 }, 00:10:25.958 { 00:10:25.958 "name": "BaseBdev3", 00:10:25.958 "uuid": "fc5860c1-3017-523e-8c07-476f63967817", 00:10:25.958 "is_configured": true, 00:10:25.958 "data_offset": 2048, 00:10:25.958 "data_size": 63488 00:10:25.958 }, 00:10:25.958 { 00:10:25.958 "name": "BaseBdev4", 00:10:25.958 "uuid": "25085446-99ae-5969-aab7-0fdf7c522854", 00:10:25.958 "is_configured": true, 00:10:25.958 "data_offset": 2048, 00:10:25.958 "data_size": 63488 00:10:25.958 } 00:10:25.958 ] 00:10:25.958 }' 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.958 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.217 [2024-11-19 10:21:39.934314] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.217 [2024-11-19 10:21:39.934350] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.217 [2024-11-19 10:21:39.936940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.217 [2024-11-19 10:21:39.937020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.217 [2024-11-19 10:21:39.937064] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.217 [2024-11-19 10:21:39.937076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:26.217 { 00:10:26.217 "results": [ 00:10:26.217 { 00:10:26.217 "job": "raid_bdev1", 00:10:26.217 "core_mask": "0x1", 00:10:26.217 "workload": "randrw", 00:10:26.217 "percentage": 50, 00:10:26.217 "status": "finished", 00:10:26.217 "queue_depth": 1, 00:10:26.217 "io_size": 131072, 00:10:26.217 "runtime": 1.358537, 00:10:26.217 "iops": 16331.539001146086, 00:10:26.217 "mibps": 2041.4423751432607, 00:10:26.217 "io_failed": 1, 00:10:26.217 "io_timeout": 0, 00:10:26.217 "avg_latency_us": 85.13951142401218, 00:10:26.217 "min_latency_us": 25.823580786026202, 00:10:26.217 "max_latency_us": 1459.5353711790392 00:10:26.217 } 00:10:26.217 ], 00:10:26.217 "core_count": 1 00:10:26.217 } 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70768 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70768 ']' 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70768 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70768 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.217 killing process with pid 70768 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70768' 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70768 00:10:26.217 [2024-11-19 10:21:39.970266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.217 10:21:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70768 00:10:26.785 [2024-11-19 10:21:40.292023] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.KnlIPvJliv 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:27.722 00:10:27.722 real 0m4.600s 00:10:27.722 user 0m5.395s 00:10:27.722 sys 0m0.575s 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:27.722 10:21:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.722 ************************************ 00:10:27.722 END TEST raid_read_error_test 00:10:27.722 ************************************ 00:10:27.722 10:21:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:27.722 10:21:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:27.722 10:21:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.722 10:21:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:27.982 ************************************ 00:10:27.982 START TEST raid_write_error_test 00:10:27.982 ************************************ 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CxHFssRgqV 00:10:27.982 10:21:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70913 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70913 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70913 ']' 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.982 10:21:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.982 [2024-11-19 10:21:41.606002] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:27.982 [2024-11-19 10:21:41.606114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70913 ] 00:10:28.242 [2024-11-19 10:21:41.763908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.242 [2024-11-19 10:21:41.875267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.501 [2024-11-19 10:21:42.071839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.502 [2024-11-19 10:21:42.071884] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.761 BaseBdev1_malloc 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.761 true 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.761 [2024-11-19 10:21:42.496572] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:28.761 [2024-11-19 10:21:42.496632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.761 [2024-11-19 10:21:42.496668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:28.761 [2024-11-19 10:21:42.496679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.761 [2024-11-19 10:21:42.498765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.761 [2024-11-19 10:21:42.498805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:28.761 BaseBdev1 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.761 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 BaseBdev2_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.021 10:21:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 true 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 [2024-11-19 10:21:42.560720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.021 [2024-11-19 10:21:42.560778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.021 [2024-11-19 10:21:42.560793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.021 [2024-11-19 10:21:42.560803] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.021 [2024-11-19 10:21:42.562843] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.021 [2024-11-19 10:21:42.562880] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.021 BaseBdev2 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:29.021 BaseBdev3_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 true 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 [2024-11-19 10:21:42.637883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.021 [2024-11-19 10:21:42.637941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.021 [2024-11-19 10:21:42.637959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.021 [2024-11-19 10:21:42.637969] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.021 [2024-11-19 10:21:42.640292] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.021 [2024-11-19 10:21:42.640338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:29.021 BaseBdev3 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 BaseBdev4_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 true 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 [2024-11-19 10:21:42.705997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:29.021 [2024-11-19 10:21:42.706058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.021 [2024-11-19 10:21:42.706090] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:29.021 [2024-11-19 10:21:42.706101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.021 [2024-11-19 10:21:42.708121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.021 [2024-11-19 10:21:42.708163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:29.021 BaseBdev4 
00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.021 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.021 [2024-11-19 10:21:42.718046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.021 [2024-11-19 10:21:42.719768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.021 [2024-11-19 10:21:42.719846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.022 [2024-11-19 10:21:42.719909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:29.022 [2024-11-19 10:21:42.720125] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:29.022 [2024-11-19 10:21:42.720150] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:29.022 [2024-11-19 10:21:42.720374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:29.022 [2024-11-19 10:21:42.720534] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:29.022 [2024-11-19 10:21:42.720551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:29.022 [2024-11-19 10:21:42.720691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.022 "name": "raid_bdev1", 00:10:29.022 "uuid": "0a20de70-f8f8-40be-9ca9-e102250d11f5", 00:10:29.022 "strip_size_kb": 64, 00:10:29.022 "state": "online", 00:10:29.022 "raid_level": "raid0", 00:10:29.022 "superblock": true, 00:10:29.022 "num_base_bdevs": 4, 00:10:29.022 "num_base_bdevs_discovered": 4, 00:10:29.022 
"num_base_bdevs_operational": 4, 00:10:29.022 "base_bdevs_list": [ 00:10:29.022 { 00:10:29.022 "name": "BaseBdev1", 00:10:29.022 "uuid": "633d1107-a38f-5149-9b7a-b3dffcd25b53", 00:10:29.022 "is_configured": true, 00:10:29.022 "data_offset": 2048, 00:10:29.022 "data_size": 63488 00:10:29.022 }, 00:10:29.022 { 00:10:29.022 "name": "BaseBdev2", 00:10:29.022 "uuid": "91fc9459-3486-5574-9864-ca1c58d9eb54", 00:10:29.022 "is_configured": true, 00:10:29.022 "data_offset": 2048, 00:10:29.022 "data_size": 63488 00:10:29.022 }, 00:10:29.022 { 00:10:29.022 "name": "BaseBdev3", 00:10:29.022 "uuid": "26a4edd0-506a-50f9-8be7-b2c2d7d4da6c", 00:10:29.022 "is_configured": true, 00:10:29.022 "data_offset": 2048, 00:10:29.022 "data_size": 63488 00:10:29.022 }, 00:10:29.022 { 00:10:29.022 "name": "BaseBdev4", 00:10:29.022 "uuid": "b6cbd0d8-a4f2-55b5-94ba-b0d65d9b4586", 00:10:29.022 "is_configured": true, 00:10:29.022 "data_offset": 2048, 00:10:29.022 "data_size": 63488 00:10:29.022 } 00:10:29.022 ] 00:10:29.022 }' 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.022 10:21:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.591 10:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.591 10:21:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.591 [2024-11-19 10:21:43.266465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.527 "name": "raid_bdev1", 00:10:30.527 "uuid": "0a20de70-f8f8-40be-9ca9-e102250d11f5", 00:10:30.527 "strip_size_kb": 64, 00:10:30.527 "state": "online", 00:10:30.527 "raid_level": "raid0", 00:10:30.527 "superblock": true, 00:10:30.527 "num_base_bdevs": 4, 00:10:30.527 "num_base_bdevs_discovered": 4, 00:10:30.527 "num_base_bdevs_operational": 4, 00:10:30.527 "base_bdevs_list": [ 00:10:30.527 { 00:10:30.527 "name": "BaseBdev1", 00:10:30.527 "uuid": "633d1107-a38f-5149-9b7a-b3dffcd25b53", 00:10:30.527 "is_configured": true, 00:10:30.527 "data_offset": 2048, 00:10:30.527 "data_size": 63488 00:10:30.527 }, 00:10:30.527 { 00:10:30.527 "name": "BaseBdev2", 00:10:30.527 "uuid": "91fc9459-3486-5574-9864-ca1c58d9eb54", 00:10:30.527 "is_configured": true, 00:10:30.527 "data_offset": 2048, 00:10:30.527 "data_size": 63488 00:10:30.527 }, 00:10:30.527 { 00:10:30.527 "name": "BaseBdev3", 00:10:30.527 "uuid": "26a4edd0-506a-50f9-8be7-b2c2d7d4da6c", 00:10:30.527 "is_configured": true, 00:10:30.527 "data_offset": 2048, 00:10:30.527 "data_size": 63488 00:10:30.527 }, 00:10:30.527 { 00:10:30.527 "name": "BaseBdev4", 00:10:30.527 "uuid": "b6cbd0d8-a4f2-55b5-94ba-b0d65d9b4586", 00:10:30.527 "is_configured": true, 00:10:30.527 "data_offset": 2048, 00:10:30.527 "data_size": 63488 00:10:30.527 } 00:10:30.527 ] 00:10:30.527 }' 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.527 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:31.097 [2024-11-19 10:21:44.630444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.097 [2024-11-19 10:21:44.630481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.097 [2024-11-19 10:21:44.633256] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.097 [2024-11-19 10:21:44.633320] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.097 [2024-11-19 10:21:44.633365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.097 [2024-11-19 10:21:44.633376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70913 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70913 ']' 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70913 00:10:31.097 { 00:10:31.097 "results": [ 00:10:31.097 { 00:10:31.097 "job": "raid_bdev1", 00:10:31.097 "core_mask": "0x1", 00:10:31.097 "workload": "randrw", 00:10:31.097 "percentage": 50, 00:10:31.097 "status": "finished", 00:10:31.097 "queue_depth": 1, 00:10:31.097 "io_size": 131072, 00:10:31.097 "runtime": 1.364842, 00:10:31.097 "iops": 16119.081915708924, 00:10:31.097 "mibps": 2014.8852394636156, 00:10:31.097 "io_failed": 1, 00:10:31.097 "io_timeout": 0, 00:10:31.097 "avg_latency_us": 86.24863316058084, 00:10:31.097 "min_latency_us": 26.382532751091702, 00:10:31.097 "max_latency_us": 1352.216593886463 00:10:31.097 } 00:10:31.097 ], 00:10:31.097 "core_count": 1 00:10:31.097 } 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70913 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.097 killing process with pid 70913 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70913' 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70913 00:10:31.097 [2024-11-19 10:21:44.673942] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.097 10:21:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70913 00:10:31.356 [2024-11-19 10:21:44.994815] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.735 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CxHFssRgqV 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:32.736 00:10:32.736 real 0m4.646s 00:10:32.736 user 0m5.520s 00:10:32.736 sys 0m0.535s 00:10:32.736 10:21:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.736 10:21:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.736 ************************************ 00:10:32.736 END TEST raid_write_error_test 00:10:32.736 ************************************ 00:10:32.736 10:21:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:32.736 10:21:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:32.736 10:21:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.736 10:21:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.736 10:21:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.736 ************************************ 00:10:32.736 START TEST raid_state_function_test 00:10:32.736 ************************************ 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71057 00:10:32.736 Process raid pid: 71057 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71057' 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71057 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71057 ']' 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.736 10:21:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.736 [2024-11-19 10:21:46.319926] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:32.736 [2024-11-19 10:21:46.320632] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.736 [2024-11-19 10:21:46.512476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.998 [2024-11-19 10:21:46.628370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.259 [2024-11-19 10:21:46.830488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.259 [2024-11-19 10:21:46.830523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.518 [2024-11-19 10:21:47.149060] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.518 [2024-11-19 10:21:47.149133] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.518 [2024-11-19 10:21:47.149145] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.518 [2024-11-19 10:21:47.149155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.518 [2024-11-19 10:21:47.149166] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:33.518 [2024-11-19 10:21:47.149175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.518 [2024-11-19 10:21:47.149182] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.518 [2024-11-19 10:21:47.149192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.518 "name": "Existed_Raid", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "strip_size_kb": 64, 00:10:33.518 "state": "configuring", 00:10:33.518 "raid_level": "concat", 00:10:33.518 "superblock": false, 00:10:33.518 "num_base_bdevs": 4, 00:10:33.518 "num_base_bdevs_discovered": 0, 00:10:33.518 "num_base_bdevs_operational": 4, 00:10:33.518 "base_bdevs_list": [ 00:10:33.518 { 00:10:33.518 "name": "BaseBdev1", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "name": "BaseBdev2", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "name": "BaseBdev3", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 }, 00:10:33.518 { 00:10:33.518 "name": "BaseBdev4", 00:10:33.518 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.518 "is_configured": false, 00:10:33.518 "data_offset": 0, 00:10:33.518 "data_size": 0 00:10:33.518 } 00:10:33.518 ] 00:10:33.518 }' 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.518 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.087 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:34.087 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.087 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.087 [2024-11-19 10:21:47.604194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.087 [2024-11-19 10:21:47.604237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:34.087 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.087 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.088 [2024-11-19 10:21:47.616167] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.088 [2024-11-19 10:21:47.616213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.088 [2024-11-19 10:21:47.616223] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.088 [2024-11-19 10:21:47.616232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.088 [2024-11-19 10:21:47.616238] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.088 [2024-11-19 10:21:47.616247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.088 [2024-11-19 10:21:47.616254] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.088 [2024-11-19 10:21:47.616262] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.088 [2024-11-19 10:21:47.664129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.088 BaseBdev1 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.088 [ 00:10:34.088 { 00:10:34.088 "name": "BaseBdev1", 00:10:34.088 "aliases": [ 00:10:34.088 "5a834b91-4dd5-4a16-a7f0-0e0041065ce0" 00:10:34.088 ], 00:10:34.088 "product_name": "Malloc disk", 00:10:34.088 "block_size": 512, 00:10:34.088 "num_blocks": 65536, 00:10:34.088 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:34.088 "assigned_rate_limits": { 00:10:34.088 "rw_ios_per_sec": 0, 00:10:34.088 "rw_mbytes_per_sec": 0, 00:10:34.088 "r_mbytes_per_sec": 0, 00:10:34.088 "w_mbytes_per_sec": 0 00:10:34.088 }, 00:10:34.088 "claimed": true, 00:10:34.088 "claim_type": "exclusive_write", 00:10:34.088 "zoned": false, 00:10:34.088 "supported_io_types": { 00:10:34.088 "read": true, 00:10:34.088 "write": true, 00:10:34.088 "unmap": true, 00:10:34.088 "flush": true, 00:10:34.088 "reset": true, 00:10:34.088 "nvme_admin": false, 00:10:34.088 "nvme_io": false, 00:10:34.088 "nvme_io_md": false, 00:10:34.088 "write_zeroes": true, 00:10:34.088 "zcopy": true, 00:10:34.088 "get_zone_info": false, 00:10:34.088 "zone_management": false, 00:10:34.088 "zone_append": false, 00:10:34.088 "compare": false, 00:10:34.088 "compare_and_write": false, 00:10:34.088 "abort": true, 00:10:34.088 "seek_hole": false, 00:10:34.088 "seek_data": false, 00:10:34.088 "copy": true, 00:10:34.088 "nvme_iov_md": false 00:10:34.088 }, 00:10:34.088 "memory_domains": [ 00:10:34.088 { 00:10:34.088 "dma_device_id": "system", 00:10:34.088 "dma_device_type": 1 00:10:34.088 }, 00:10:34.088 { 00:10:34.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.088 "dma_device_type": 2 00:10:34.088 } 00:10:34.088 ], 00:10:34.088 "driver_specific": {} 00:10:34.088 } 00:10:34.088 ] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.088 "name": "Existed_Raid", 
00:10:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.088 "strip_size_kb": 64, 00:10:34.088 "state": "configuring", 00:10:34.088 "raid_level": "concat", 00:10:34.088 "superblock": false, 00:10:34.088 "num_base_bdevs": 4, 00:10:34.088 "num_base_bdevs_discovered": 1, 00:10:34.088 "num_base_bdevs_operational": 4, 00:10:34.088 "base_bdevs_list": [ 00:10:34.088 { 00:10:34.088 "name": "BaseBdev1", 00:10:34.088 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:34.088 "is_configured": true, 00:10:34.088 "data_offset": 0, 00:10:34.088 "data_size": 65536 00:10:34.088 }, 00:10:34.088 { 00:10:34.088 "name": "BaseBdev2", 00:10:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.088 "is_configured": false, 00:10:34.088 "data_offset": 0, 00:10:34.088 "data_size": 0 00:10:34.088 }, 00:10:34.088 { 00:10:34.088 "name": "BaseBdev3", 00:10:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.088 "is_configured": false, 00:10:34.088 "data_offset": 0, 00:10:34.088 "data_size": 0 00:10:34.088 }, 00:10:34.088 { 00:10:34.088 "name": "BaseBdev4", 00:10:34.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.088 "is_configured": false, 00:10:34.088 "data_offset": 0, 00:10:34.088 "data_size": 0 00:10:34.088 } 00:10:34.088 ] 00:10:34.088 }' 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.088 10:21:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.347 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.347 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.348 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 [2024-11-19 10:21:48.115387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.348 [2024-11-19 10:21:48.115443] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:34.348 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.348 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.348 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.348 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.348 [2024-11-19 10:21:48.123429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.348 [2024-11-19 10:21:48.125255] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.348 [2024-11-19 10:21:48.125295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.348 [2024-11-19 10:21:48.125321] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.348 [2024-11-19 10:21:48.125332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.348 [2024-11-19 10:21:48.125339] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.348 [2024-11-19 10:21:48.125348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.606 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.607 "name": "Existed_Raid", 00:10:34.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.607 "strip_size_kb": 64, 00:10:34.607 "state": "configuring", 00:10:34.607 "raid_level": "concat", 00:10:34.607 "superblock": false, 00:10:34.607 "num_base_bdevs": 4, 00:10:34.607 
"num_base_bdevs_discovered": 1, 00:10:34.607 "num_base_bdevs_operational": 4, 00:10:34.607 "base_bdevs_list": [ 00:10:34.607 { 00:10:34.607 "name": "BaseBdev1", 00:10:34.607 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:34.607 "is_configured": true, 00:10:34.607 "data_offset": 0, 00:10:34.607 "data_size": 65536 00:10:34.607 }, 00:10:34.607 { 00:10:34.607 "name": "BaseBdev2", 00:10:34.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.607 "is_configured": false, 00:10:34.607 "data_offset": 0, 00:10:34.607 "data_size": 0 00:10:34.607 }, 00:10:34.607 { 00:10:34.607 "name": "BaseBdev3", 00:10:34.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.607 "is_configured": false, 00:10:34.607 "data_offset": 0, 00:10:34.607 "data_size": 0 00:10:34.607 }, 00:10:34.607 { 00:10:34.607 "name": "BaseBdev4", 00:10:34.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.607 "is_configured": false, 00:10:34.607 "data_offset": 0, 00:10:34.607 "data_size": 0 00:10:34.607 } 00:10:34.607 ] 00:10:34.607 }' 00:10:34.607 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.607 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.866 [2024-11-19 10:21:48.543931] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.866 BaseBdev2 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:34.866 10:21:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.866 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.866 [ 00:10:34.866 { 00:10:34.866 "name": "BaseBdev2", 00:10:34.866 "aliases": [ 00:10:34.866 "98e7eea1-05ab-4cf0-a5bc-595a975d604f" 00:10:34.866 ], 00:10:34.866 "product_name": "Malloc disk", 00:10:34.866 "block_size": 512, 00:10:34.866 "num_blocks": 65536, 00:10:34.866 "uuid": "98e7eea1-05ab-4cf0-a5bc-595a975d604f", 00:10:34.866 "assigned_rate_limits": { 00:10:34.866 "rw_ios_per_sec": 0, 00:10:34.866 "rw_mbytes_per_sec": 0, 00:10:34.866 "r_mbytes_per_sec": 0, 00:10:34.866 "w_mbytes_per_sec": 0 00:10:34.866 }, 00:10:34.866 "claimed": true, 00:10:34.866 "claim_type": "exclusive_write", 00:10:34.866 "zoned": false, 00:10:34.866 "supported_io_types": { 
00:10:34.866 "read": true, 00:10:34.866 "write": true, 00:10:34.866 "unmap": true, 00:10:34.866 "flush": true, 00:10:34.866 "reset": true, 00:10:34.866 "nvme_admin": false, 00:10:34.866 "nvme_io": false, 00:10:34.866 "nvme_io_md": false, 00:10:34.866 "write_zeroes": true, 00:10:34.866 "zcopy": true, 00:10:34.866 "get_zone_info": false, 00:10:34.866 "zone_management": false, 00:10:34.866 "zone_append": false, 00:10:34.866 "compare": false, 00:10:34.866 "compare_and_write": false, 00:10:34.866 "abort": true, 00:10:34.866 "seek_hole": false, 00:10:34.866 "seek_data": false, 00:10:34.866 "copy": true, 00:10:34.866 "nvme_iov_md": false 00:10:34.866 }, 00:10:34.866 "memory_domains": [ 00:10:34.866 { 00:10:34.866 "dma_device_id": "system", 00:10:34.866 "dma_device_type": 1 00:10:34.866 }, 00:10:34.866 { 00:10:34.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.866 "dma_device_type": 2 00:10:34.867 } 00:10:34.867 ], 00:10:34.867 "driver_specific": {} 00:10:34.867 } 00:10:34.867 ] 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.867 "name": "Existed_Raid", 00:10:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.867 "strip_size_kb": 64, 00:10:34.867 "state": "configuring", 00:10:34.867 "raid_level": "concat", 00:10:34.867 "superblock": false, 00:10:34.867 "num_base_bdevs": 4, 00:10:34.867 "num_base_bdevs_discovered": 2, 00:10:34.867 "num_base_bdevs_operational": 4, 00:10:34.867 "base_bdevs_list": [ 00:10:34.867 { 00:10:34.867 "name": "BaseBdev1", 00:10:34.867 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:34.867 "is_configured": true, 00:10:34.867 "data_offset": 0, 00:10:34.867 "data_size": 65536 00:10:34.867 }, 00:10:34.867 { 00:10:34.867 "name": "BaseBdev2", 00:10:34.867 "uuid": "98e7eea1-05ab-4cf0-a5bc-595a975d604f", 00:10:34.867 
"is_configured": true, 00:10:34.867 "data_offset": 0, 00:10:34.867 "data_size": 65536 00:10:34.867 }, 00:10:34.867 { 00:10:34.867 "name": "BaseBdev3", 00:10:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.867 "is_configured": false, 00:10:34.867 "data_offset": 0, 00:10:34.867 "data_size": 0 00:10:34.867 }, 00:10:34.867 { 00:10:34.867 "name": "BaseBdev4", 00:10:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.867 "is_configured": false, 00:10:34.867 "data_offset": 0, 00:10:34.867 "data_size": 0 00:10:34.867 } 00:10:34.867 ] 00:10:34.867 }' 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.867 10:21:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 [2024-11-19 10:21:49.082015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.436 BaseBdev3 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.436 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.436 [ 00:10:35.436 { 00:10:35.436 "name": "BaseBdev3", 00:10:35.436 "aliases": [ 00:10:35.436 "799f4358-8f68-44b5-a0a5-d18df4c42d56" 00:10:35.436 ], 00:10:35.436 "product_name": "Malloc disk", 00:10:35.436 "block_size": 512, 00:10:35.436 "num_blocks": 65536, 00:10:35.436 "uuid": "799f4358-8f68-44b5-a0a5-d18df4c42d56", 00:10:35.436 "assigned_rate_limits": { 00:10:35.436 "rw_ios_per_sec": 0, 00:10:35.436 "rw_mbytes_per_sec": 0, 00:10:35.436 "r_mbytes_per_sec": 0, 00:10:35.436 "w_mbytes_per_sec": 0 00:10:35.436 }, 00:10:35.436 "claimed": true, 00:10:35.436 "claim_type": "exclusive_write", 00:10:35.436 "zoned": false, 00:10:35.436 "supported_io_types": { 00:10:35.436 "read": true, 00:10:35.436 "write": true, 00:10:35.436 "unmap": true, 00:10:35.436 "flush": true, 00:10:35.436 "reset": true, 00:10:35.436 "nvme_admin": false, 00:10:35.436 "nvme_io": false, 00:10:35.436 "nvme_io_md": false, 00:10:35.436 "write_zeroes": true, 00:10:35.436 "zcopy": true, 00:10:35.436 "get_zone_info": false, 00:10:35.436 "zone_management": false, 00:10:35.436 "zone_append": false, 00:10:35.436 "compare": false, 00:10:35.436 "compare_and_write": false, 
00:10:35.436 "abort": true, 00:10:35.436 "seek_hole": false, 00:10:35.436 "seek_data": false, 00:10:35.436 "copy": true, 00:10:35.436 "nvme_iov_md": false 00:10:35.436 }, 00:10:35.436 "memory_domains": [ 00:10:35.436 { 00:10:35.436 "dma_device_id": "system", 00:10:35.436 "dma_device_type": 1 00:10:35.436 }, 00:10:35.436 { 00:10:35.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.436 "dma_device_type": 2 00:10:35.436 } 00:10:35.436 ], 00:10:35.437 "driver_specific": {} 00:10:35.437 } 00:10:35.437 ] 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.437 "name": "Existed_Raid", 00:10:35.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.437 "strip_size_kb": 64, 00:10:35.437 "state": "configuring", 00:10:35.437 "raid_level": "concat", 00:10:35.437 "superblock": false, 00:10:35.437 "num_base_bdevs": 4, 00:10:35.437 "num_base_bdevs_discovered": 3, 00:10:35.437 "num_base_bdevs_operational": 4, 00:10:35.437 "base_bdevs_list": [ 00:10:35.437 { 00:10:35.437 "name": "BaseBdev1", 00:10:35.437 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:35.437 "is_configured": true, 00:10:35.437 "data_offset": 0, 00:10:35.437 "data_size": 65536 00:10:35.437 }, 00:10:35.437 { 00:10:35.437 "name": "BaseBdev2", 00:10:35.437 "uuid": "98e7eea1-05ab-4cf0-a5bc-595a975d604f", 00:10:35.437 "is_configured": true, 00:10:35.437 "data_offset": 0, 00:10:35.437 "data_size": 65536 00:10:35.437 }, 00:10:35.437 { 00:10:35.437 "name": "BaseBdev3", 00:10:35.437 "uuid": "799f4358-8f68-44b5-a0a5-d18df4c42d56", 00:10:35.437 "is_configured": true, 00:10:35.437 "data_offset": 0, 00:10:35.437 "data_size": 65536 00:10:35.437 }, 00:10:35.437 { 00:10:35.437 "name": "BaseBdev4", 00:10:35.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.437 "is_configured": false, 
00:10:35.437 "data_offset": 0, 00:10:35.437 "data_size": 0 00:10:35.437 } 00:10:35.437 ] 00:10:35.437 }' 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.437 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.004 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:36.004 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.004 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.004 [2024-11-19 10:21:49.583594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.004 [2024-11-19 10:21:49.583651] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.004 [2024-11-19 10:21:49.583660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:36.004 [2024-11-19 10:21:49.583928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.004 [2024-11-19 10:21:49.584122] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.004 [2024-11-19 10:21:49.584145] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.004 [2024-11-19 10:21:49.584404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.004 BaseBdev4 00:10:36.004 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 [ 00:10:36.005 { 00:10:36.005 "name": "BaseBdev4", 00:10:36.005 "aliases": [ 00:10:36.005 "f51e989e-6ad8-4cfd-a3c3-c37d7649a9bd" 00:10:36.005 ], 00:10:36.005 "product_name": "Malloc disk", 00:10:36.005 "block_size": 512, 00:10:36.005 "num_blocks": 65536, 00:10:36.005 "uuid": "f51e989e-6ad8-4cfd-a3c3-c37d7649a9bd", 00:10:36.005 "assigned_rate_limits": { 00:10:36.005 "rw_ios_per_sec": 0, 00:10:36.005 "rw_mbytes_per_sec": 0, 00:10:36.005 "r_mbytes_per_sec": 0, 00:10:36.005 "w_mbytes_per_sec": 0 00:10:36.005 }, 00:10:36.005 "claimed": true, 00:10:36.005 "claim_type": "exclusive_write", 00:10:36.005 "zoned": false, 00:10:36.005 "supported_io_types": { 00:10:36.005 "read": true, 00:10:36.005 "write": true, 00:10:36.005 "unmap": true, 00:10:36.005 "flush": true, 00:10:36.005 "reset": true, 00:10:36.005 
"nvme_admin": false, 00:10:36.005 "nvme_io": false, 00:10:36.005 "nvme_io_md": false, 00:10:36.005 "write_zeroes": true, 00:10:36.005 "zcopy": true, 00:10:36.005 "get_zone_info": false, 00:10:36.005 "zone_management": false, 00:10:36.005 "zone_append": false, 00:10:36.005 "compare": false, 00:10:36.005 "compare_and_write": false, 00:10:36.005 "abort": true, 00:10:36.005 "seek_hole": false, 00:10:36.005 "seek_data": false, 00:10:36.005 "copy": true, 00:10:36.005 "nvme_iov_md": false 00:10:36.005 }, 00:10:36.005 "memory_domains": [ 00:10:36.005 { 00:10:36.005 "dma_device_id": "system", 00:10:36.005 "dma_device_type": 1 00:10:36.005 }, 00:10:36.005 { 00:10:36.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.005 "dma_device_type": 2 00:10:36.005 } 00:10:36.005 ], 00:10:36.005 "driver_specific": {} 00:10:36.005 } 00:10:36.005 ] 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.005 
10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.005 "name": "Existed_Raid", 00:10:36.005 "uuid": "bd583cc7-02c6-473b-8c83-1e5d2f731285", 00:10:36.005 "strip_size_kb": 64, 00:10:36.005 "state": "online", 00:10:36.005 "raid_level": "concat", 00:10:36.005 "superblock": false, 00:10:36.005 "num_base_bdevs": 4, 00:10:36.005 "num_base_bdevs_discovered": 4, 00:10:36.005 "num_base_bdevs_operational": 4, 00:10:36.005 "base_bdevs_list": [ 00:10:36.005 { 00:10:36.005 "name": "BaseBdev1", 00:10:36.005 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:36.005 "is_configured": true, 00:10:36.005 "data_offset": 0, 00:10:36.005 "data_size": 65536 00:10:36.005 }, 00:10:36.005 { 00:10:36.005 "name": "BaseBdev2", 00:10:36.005 "uuid": "98e7eea1-05ab-4cf0-a5bc-595a975d604f", 00:10:36.005 "is_configured": true, 00:10:36.005 "data_offset": 0, 00:10:36.005 "data_size": 65536 00:10:36.005 }, 00:10:36.005 { 00:10:36.005 "name": "BaseBdev3", 
00:10:36.005 "uuid": "799f4358-8f68-44b5-a0a5-d18df4c42d56", 00:10:36.005 "is_configured": true, 00:10:36.005 "data_offset": 0, 00:10:36.005 "data_size": 65536 00:10:36.005 }, 00:10:36.005 { 00:10:36.005 "name": "BaseBdev4", 00:10:36.005 "uuid": "f51e989e-6ad8-4cfd-a3c3-c37d7649a9bd", 00:10:36.005 "is_configured": true, 00:10:36.005 "data_offset": 0, 00:10:36.005 "data_size": 65536 00:10:36.005 } 00:10:36.005 ] 00:10:36.005 }' 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.005 10:21:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.576 [2024-11-19 10:21:50.087202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.576 
10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.576 "name": "Existed_Raid", 00:10:36.576 "aliases": [ 00:10:36.576 "bd583cc7-02c6-473b-8c83-1e5d2f731285" 00:10:36.576 ], 00:10:36.576 "product_name": "Raid Volume", 00:10:36.576 "block_size": 512, 00:10:36.576 "num_blocks": 262144, 00:10:36.576 "uuid": "bd583cc7-02c6-473b-8c83-1e5d2f731285", 00:10:36.576 "assigned_rate_limits": { 00:10:36.576 "rw_ios_per_sec": 0, 00:10:36.576 "rw_mbytes_per_sec": 0, 00:10:36.576 "r_mbytes_per_sec": 0, 00:10:36.576 "w_mbytes_per_sec": 0 00:10:36.576 }, 00:10:36.576 "claimed": false, 00:10:36.576 "zoned": false, 00:10:36.576 "supported_io_types": { 00:10:36.576 "read": true, 00:10:36.576 "write": true, 00:10:36.576 "unmap": true, 00:10:36.576 "flush": true, 00:10:36.576 "reset": true, 00:10:36.576 "nvme_admin": false, 00:10:36.576 "nvme_io": false, 00:10:36.576 "nvme_io_md": false, 00:10:36.576 "write_zeroes": true, 00:10:36.576 "zcopy": false, 00:10:36.576 "get_zone_info": false, 00:10:36.576 "zone_management": false, 00:10:36.576 "zone_append": false, 00:10:36.576 "compare": false, 00:10:36.576 "compare_and_write": false, 00:10:36.576 "abort": false, 00:10:36.576 "seek_hole": false, 00:10:36.576 "seek_data": false, 00:10:36.576 "copy": false, 00:10:36.576 "nvme_iov_md": false 00:10:36.576 }, 00:10:36.576 "memory_domains": [ 00:10:36.576 { 00:10:36.576 "dma_device_id": "system", 00:10:36.576 "dma_device_type": 1 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.576 "dma_device_type": 2 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": "system", 00:10:36.576 "dma_device_type": 1 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.576 "dma_device_type": 2 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": "system", 00:10:36.576 "dma_device_type": 1 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:36.576 "dma_device_type": 2 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": "system", 00:10:36.576 "dma_device_type": 1 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.576 "dma_device_type": 2 00:10:36.576 } 00:10:36.576 ], 00:10:36.576 "driver_specific": { 00:10:36.576 "raid": { 00:10:36.576 "uuid": "bd583cc7-02c6-473b-8c83-1e5d2f731285", 00:10:36.576 "strip_size_kb": 64, 00:10:36.576 "state": "online", 00:10:36.576 "raid_level": "concat", 00:10:36.576 "superblock": false, 00:10:36.576 "num_base_bdevs": 4, 00:10:36.576 "num_base_bdevs_discovered": 4, 00:10:36.576 "num_base_bdevs_operational": 4, 00:10:36.576 "base_bdevs_list": [ 00:10:36.576 { 00:10:36.576 "name": "BaseBdev1", 00:10:36.576 "uuid": "5a834b91-4dd5-4a16-a7f0-0e0041065ce0", 00:10:36.576 "is_configured": true, 00:10:36.576 "data_offset": 0, 00:10:36.576 "data_size": 65536 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "name": "BaseBdev2", 00:10:36.576 "uuid": "98e7eea1-05ab-4cf0-a5bc-595a975d604f", 00:10:36.576 "is_configured": true, 00:10:36.576 "data_offset": 0, 00:10:36.576 "data_size": 65536 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "name": "BaseBdev3", 00:10:36.576 "uuid": "799f4358-8f68-44b5-a0a5-d18df4c42d56", 00:10:36.576 "is_configured": true, 00:10:36.576 "data_offset": 0, 00:10:36.576 "data_size": 65536 00:10:36.576 }, 00:10:36.576 { 00:10:36.576 "name": "BaseBdev4", 00:10:36.576 "uuid": "f51e989e-6ad8-4cfd-a3c3-c37d7649a9bd", 00:10:36.576 "is_configured": true, 00:10:36.576 "data_offset": 0, 00:10:36.576 "data_size": 65536 00:10:36.576 } 00:10:36.576 ] 00:10:36.576 } 00:10:36.576 } 00:10:36.576 }' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.576 BaseBdev2 
00:10:36.576 BaseBdev3 00:10:36.576 BaseBdev4' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.576 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.577 10:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.577 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.836 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.836 10:21:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.836 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:36.836 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.836 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.837 [2024-11-19 10:21:50.362388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:36.837 [2024-11-19 10:21:50.362421] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:36.837 [2024-11-19 10:21:50.362471] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.837 "name": "Existed_Raid", 00:10:36.837 "uuid": "bd583cc7-02c6-473b-8c83-1e5d2f731285", 00:10:36.837 "strip_size_kb": 64, 00:10:36.837 "state": "offline", 00:10:36.837 "raid_level": "concat", 00:10:36.837 "superblock": false, 00:10:36.837 "num_base_bdevs": 4, 00:10:36.837 "num_base_bdevs_discovered": 3, 00:10:36.837 "num_base_bdevs_operational": 3, 00:10:36.837 "base_bdevs_list": [ 00:10:36.837 { 00:10:36.837 "name": null, 00:10:36.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.837 "is_configured": false, 00:10:36.837 "data_offset": 0, 00:10:36.837 "data_size": 65536 00:10:36.837 }, 00:10:36.837 { 00:10:36.837 "name": "BaseBdev2", 00:10:36.837 "uuid": "98e7eea1-05ab-4cf0-a5bc-595a975d604f", 00:10:36.837 "is_configured": 
true, 00:10:36.837 "data_offset": 0, 00:10:36.837 "data_size": 65536 00:10:36.837 }, 00:10:36.837 { 00:10:36.837 "name": "BaseBdev3", 00:10:36.837 "uuid": "799f4358-8f68-44b5-a0a5-d18df4c42d56", 00:10:36.837 "is_configured": true, 00:10:36.837 "data_offset": 0, 00:10:36.837 "data_size": 65536 00:10:36.837 }, 00:10:36.837 { 00:10:36.837 "name": "BaseBdev4", 00:10:36.837 "uuid": "f51e989e-6ad8-4cfd-a3c3-c37d7649a9bd", 00:10:36.837 "is_configured": true, 00:10:36.837 "data_offset": 0, 00:10:36.837 "data_size": 65536 00:10:36.837 } 00:10:36.837 ] 00:10:36.837 }' 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.837 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:37.407 10:21:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.407 [2024-11-19 10:21:50.946061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.407 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.408 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.408 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:37.408 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.408 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.408 [2024-11-19 10:21:51.098060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.667 10:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.667 [2024-11-19 10:21:51.258854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.667 [2024-11-19 10:21:51.258907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.667 BaseBdev2 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.667 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 [ 00:10:37.927 { 00:10:37.927 "name": "BaseBdev2", 00:10:37.927 "aliases": [ 00:10:37.927 "9dd3c707-dcd9-43ef-9399-3b46285775de" 00:10:37.927 ], 00:10:37.927 "product_name": "Malloc disk", 00:10:37.927 "block_size": 512, 00:10:37.927 "num_blocks": 65536, 00:10:37.927 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:37.927 "assigned_rate_limits": { 00:10:37.927 "rw_ios_per_sec": 0, 00:10:37.927 "rw_mbytes_per_sec": 0, 00:10:37.927 "r_mbytes_per_sec": 0, 00:10:37.927 "w_mbytes_per_sec": 0 00:10:37.927 }, 00:10:37.927 "claimed": false, 00:10:37.927 "zoned": false, 00:10:37.927 "supported_io_types": { 00:10:37.927 "read": true, 00:10:37.927 "write": true, 00:10:37.927 "unmap": true, 00:10:37.927 "flush": true, 00:10:37.927 "reset": true, 00:10:37.927 "nvme_admin": false, 00:10:37.927 "nvme_io": false, 00:10:37.927 "nvme_io_md": false, 00:10:37.927 "write_zeroes": true, 00:10:37.927 "zcopy": true, 00:10:37.927 "get_zone_info": false, 00:10:37.927 "zone_management": false, 00:10:37.927 "zone_append": false, 00:10:37.927 "compare": false, 00:10:37.927 "compare_and_write": false, 00:10:37.927 "abort": true, 00:10:37.927 "seek_hole": false, 00:10:37.927 
"seek_data": false, 00:10:37.927 "copy": true, 00:10:37.927 "nvme_iov_md": false 00:10:37.927 }, 00:10:37.927 "memory_domains": [ 00:10:37.927 { 00:10:37.927 "dma_device_id": "system", 00:10:37.927 "dma_device_type": 1 00:10:37.927 }, 00:10:37.927 { 00:10:37.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.927 "dma_device_type": 2 00:10:37.927 } 00:10:37.927 ], 00:10:37.927 "driver_specific": {} 00:10:37.927 } 00:10:37.927 ] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 BaseBdev3 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 [ 00:10:37.927 { 00:10:37.927 "name": "BaseBdev3", 00:10:37.927 "aliases": [ 00:10:37.927 "2ce68f74-a04c-4422-81c5-224cd2b4f459" 00:10:37.927 ], 00:10:37.927 "product_name": "Malloc disk", 00:10:37.927 "block_size": 512, 00:10:37.927 "num_blocks": 65536, 00:10:37.927 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:37.927 "assigned_rate_limits": { 00:10:37.927 "rw_ios_per_sec": 0, 00:10:37.927 "rw_mbytes_per_sec": 0, 00:10:37.927 "r_mbytes_per_sec": 0, 00:10:37.927 "w_mbytes_per_sec": 0 00:10:37.927 }, 00:10:37.927 "claimed": false, 00:10:37.927 "zoned": false, 00:10:37.927 "supported_io_types": { 00:10:37.927 "read": true, 00:10:37.927 "write": true, 00:10:37.927 "unmap": true, 00:10:37.927 "flush": true, 00:10:37.927 "reset": true, 00:10:37.927 "nvme_admin": false, 00:10:37.927 "nvme_io": false, 00:10:37.927 "nvme_io_md": false, 00:10:37.927 "write_zeroes": true, 00:10:37.927 "zcopy": true, 00:10:37.927 "get_zone_info": false, 00:10:37.927 "zone_management": false, 00:10:37.927 "zone_append": false, 00:10:37.927 "compare": false, 00:10:37.927 "compare_and_write": false, 00:10:37.927 "abort": true, 00:10:37.927 "seek_hole": false, 00:10:37.927 "seek_data": false, 
00:10:37.927 "copy": true, 00:10:37.927 "nvme_iov_md": false 00:10:37.927 }, 00:10:37.927 "memory_domains": [ 00:10:37.927 { 00:10:37.927 "dma_device_id": "system", 00:10:37.927 "dma_device_type": 1 00:10:37.927 }, 00:10:37.927 { 00:10:37.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.927 "dma_device_type": 2 00:10:37.927 } 00:10:37.927 ], 00:10:37.927 "driver_specific": {} 00:10:37.927 } 00:10:37.927 ] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 BaseBdev4 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.927 
10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.927 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.927 [ 00:10:37.927 { 00:10:37.927 "name": "BaseBdev4", 00:10:37.927 "aliases": [ 00:10:37.927 "cca84588-9761-4874-a3d2-6357830b8352" 00:10:37.927 ], 00:10:37.927 "product_name": "Malloc disk", 00:10:37.927 "block_size": 512, 00:10:37.927 "num_blocks": 65536, 00:10:37.927 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:37.927 "assigned_rate_limits": { 00:10:37.927 "rw_ios_per_sec": 0, 00:10:37.927 "rw_mbytes_per_sec": 0, 00:10:37.927 "r_mbytes_per_sec": 0, 00:10:37.927 "w_mbytes_per_sec": 0 00:10:37.927 }, 00:10:37.927 "claimed": false, 00:10:37.927 "zoned": false, 00:10:37.927 "supported_io_types": { 00:10:37.927 "read": true, 00:10:37.927 "write": true, 00:10:37.927 "unmap": true, 00:10:37.927 "flush": true, 00:10:37.927 "reset": true, 00:10:37.927 "nvme_admin": false, 00:10:37.927 "nvme_io": false, 00:10:37.927 "nvme_io_md": false, 00:10:37.927 "write_zeroes": true, 00:10:37.927 "zcopy": true, 00:10:37.927 "get_zone_info": false, 00:10:37.927 "zone_management": false, 00:10:37.927 "zone_append": false, 00:10:37.927 "compare": false, 00:10:37.927 "compare_and_write": false, 00:10:37.927 "abort": true, 00:10:37.927 "seek_hole": false, 00:10:37.927 "seek_data": false, 00:10:37.927 
"copy": true, 00:10:37.927 "nvme_iov_md": false 00:10:37.927 }, 00:10:37.927 "memory_domains": [ 00:10:37.927 { 00:10:37.928 "dma_device_id": "system", 00:10:37.928 "dma_device_type": 1 00:10:37.928 }, 00:10:37.928 { 00:10:37.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.928 "dma_device_type": 2 00:10:37.928 } 00:10:37.928 ], 00:10:37.928 "driver_specific": {} 00:10:37.928 } 00:10:37.928 ] 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.928 [2024-11-19 10:21:51.596423] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:37.928 [2024-11-19 10:21:51.596471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:37.928 [2024-11-19 10:21:51.596511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:37.928 [2024-11-19 10:21:51.598373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.928 [2024-11-19 10:21:51.598446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.928 10:21:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.928 "name": "Existed_Raid", 00:10:37.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.928 "strip_size_kb": 64, 00:10:37.928 "state": "configuring", 00:10:37.928 
"raid_level": "concat", 00:10:37.928 "superblock": false, 00:10:37.928 "num_base_bdevs": 4, 00:10:37.928 "num_base_bdevs_discovered": 3, 00:10:37.928 "num_base_bdevs_operational": 4, 00:10:37.928 "base_bdevs_list": [ 00:10:37.928 { 00:10:37.928 "name": "BaseBdev1", 00:10:37.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.928 "is_configured": false, 00:10:37.928 "data_offset": 0, 00:10:37.928 "data_size": 0 00:10:37.928 }, 00:10:37.928 { 00:10:37.928 "name": "BaseBdev2", 00:10:37.928 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:37.928 "is_configured": true, 00:10:37.928 "data_offset": 0, 00:10:37.928 "data_size": 65536 00:10:37.928 }, 00:10:37.928 { 00:10:37.928 "name": "BaseBdev3", 00:10:37.928 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:37.928 "is_configured": true, 00:10:37.928 "data_offset": 0, 00:10:37.928 "data_size": 65536 00:10:37.928 }, 00:10:37.928 { 00:10:37.928 "name": "BaseBdev4", 00:10:37.928 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:37.928 "is_configured": true, 00:10:37.928 "data_offset": 0, 00:10:37.928 "data_size": 65536 00:10:37.928 } 00:10:37.928 ] 00:10:37.928 }' 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.928 10:21:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.497 [2024-11-19 10:21:52.019720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.497 "name": "Existed_Raid", 00:10:38.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.497 "strip_size_kb": 64, 00:10:38.497 "state": "configuring", 00:10:38.497 "raid_level": "concat", 00:10:38.497 "superblock": false, 
00:10:38.497 "num_base_bdevs": 4, 00:10:38.497 "num_base_bdevs_discovered": 2, 00:10:38.497 "num_base_bdevs_operational": 4, 00:10:38.497 "base_bdevs_list": [ 00:10:38.497 { 00:10:38.497 "name": "BaseBdev1", 00:10:38.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.497 "is_configured": false, 00:10:38.497 "data_offset": 0, 00:10:38.497 "data_size": 0 00:10:38.497 }, 00:10:38.497 { 00:10:38.497 "name": null, 00:10:38.497 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:38.497 "is_configured": false, 00:10:38.497 "data_offset": 0, 00:10:38.497 "data_size": 65536 00:10:38.497 }, 00:10:38.497 { 00:10:38.497 "name": "BaseBdev3", 00:10:38.497 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:38.497 "is_configured": true, 00:10:38.497 "data_offset": 0, 00:10:38.497 "data_size": 65536 00:10:38.497 }, 00:10:38.497 { 00:10:38.497 "name": "BaseBdev4", 00:10:38.497 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:38.497 "is_configured": true, 00:10:38.497 "data_offset": 0, 00:10:38.497 "data_size": 65536 00:10:38.497 } 00:10:38.497 ] 00:10:38.497 }' 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.497 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:38.756 10:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.756 [2024-11-19 10:21:52.479924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.756 BaseBdev1 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.756 10:21:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:38.756 [ 00:10:38.757 { 00:10:38.757 "name": "BaseBdev1", 00:10:38.757 "aliases": [ 00:10:38.757 "f0ceb764-4a8b-4cfe-9cae-a24511734295" 00:10:38.757 ], 00:10:38.757 "product_name": "Malloc disk", 00:10:38.757 "block_size": 512, 00:10:38.757 "num_blocks": 65536, 00:10:38.757 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:38.757 "assigned_rate_limits": { 00:10:38.757 "rw_ios_per_sec": 0, 00:10:38.757 "rw_mbytes_per_sec": 0, 00:10:38.757 "r_mbytes_per_sec": 0, 00:10:38.757 "w_mbytes_per_sec": 0 00:10:38.757 }, 00:10:38.757 "claimed": true, 00:10:38.757 "claim_type": "exclusive_write", 00:10:38.757 "zoned": false, 00:10:38.757 "supported_io_types": { 00:10:38.757 "read": true, 00:10:38.757 "write": true, 00:10:38.757 "unmap": true, 00:10:38.757 "flush": true, 00:10:38.757 "reset": true, 00:10:38.757 "nvme_admin": false, 00:10:38.757 "nvme_io": false, 00:10:38.757 "nvme_io_md": false, 00:10:38.757 "write_zeroes": true, 00:10:38.757 "zcopy": true, 00:10:38.757 "get_zone_info": false, 00:10:38.757 "zone_management": false, 00:10:38.757 "zone_append": false, 00:10:38.757 "compare": false, 00:10:38.757 "compare_and_write": false, 00:10:38.757 "abort": true, 00:10:38.757 "seek_hole": false, 00:10:38.757 "seek_data": false, 00:10:38.757 "copy": true, 00:10:38.757 "nvme_iov_md": false 00:10:38.757 }, 00:10:38.757 "memory_domains": [ 00:10:38.757 { 00:10:38.757 "dma_device_id": "system", 00:10:38.757 "dma_device_type": 1 00:10:38.757 }, 00:10:38.757 { 00:10:38.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.757 "dma_device_type": 2 00:10:38.757 } 00:10:38.757 ], 00:10:38.757 "driver_specific": {} 00:10:38.757 } 00:10:38.757 ] 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.757 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.055 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.055 "name": "Existed_Raid", 00:10:39.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.055 "strip_size_kb": 64, 00:10:39.055 "state": "configuring", 00:10:39.055 "raid_level": "concat", 00:10:39.055 "superblock": false, 
00:10:39.055 "num_base_bdevs": 4, 00:10:39.055 "num_base_bdevs_discovered": 3, 00:10:39.055 "num_base_bdevs_operational": 4, 00:10:39.055 "base_bdevs_list": [ 00:10:39.055 { 00:10:39.055 "name": "BaseBdev1", 00:10:39.055 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:39.055 "is_configured": true, 00:10:39.055 "data_offset": 0, 00:10:39.055 "data_size": 65536 00:10:39.055 }, 00:10:39.055 { 00:10:39.055 "name": null, 00:10:39.055 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:39.055 "is_configured": false, 00:10:39.055 "data_offset": 0, 00:10:39.055 "data_size": 65536 00:10:39.055 }, 00:10:39.055 { 00:10:39.055 "name": "BaseBdev3", 00:10:39.055 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:39.055 "is_configured": true, 00:10:39.055 "data_offset": 0, 00:10:39.055 "data_size": 65536 00:10:39.055 }, 00:10:39.055 { 00:10:39.055 "name": "BaseBdev4", 00:10:39.055 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:39.055 "is_configured": true, 00:10:39.055 "data_offset": 0, 00:10:39.055 "data_size": 65536 00:10:39.055 } 00:10:39.055 ] 00:10:39.055 }' 00:10:39.055 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.055 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:39.314 10:21:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.314 [2024-11-19 10:21:52.987157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.314 10:21:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.314 10:21:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.314 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.314 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.314 "name": "Existed_Raid", 00:10:39.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.314 "strip_size_kb": 64, 00:10:39.314 "state": "configuring", 00:10:39.314 "raid_level": "concat", 00:10:39.314 "superblock": false, 00:10:39.314 "num_base_bdevs": 4, 00:10:39.314 "num_base_bdevs_discovered": 2, 00:10:39.314 "num_base_bdevs_operational": 4, 00:10:39.314 "base_bdevs_list": [ 00:10:39.314 { 00:10:39.314 "name": "BaseBdev1", 00:10:39.314 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:39.314 "is_configured": true, 00:10:39.314 "data_offset": 0, 00:10:39.314 "data_size": 65536 00:10:39.314 }, 00:10:39.314 { 00:10:39.314 "name": null, 00:10:39.314 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:39.314 "is_configured": false, 00:10:39.314 "data_offset": 0, 00:10:39.314 "data_size": 65536 00:10:39.314 }, 00:10:39.314 { 00:10:39.314 "name": null, 00:10:39.314 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:39.314 "is_configured": false, 00:10:39.314 "data_offset": 0, 00:10:39.314 "data_size": 65536 00:10:39.314 }, 00:10:39.314 { 00:10:39.314 "name": "BaseBdev4", 00:10:39.314 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:39.314 "is_configured": true, 00:10:39.314 "data_offset": 0, 00:10:39.314 "data_size": 65536 00:10:39.314 } 00:10:39.314 ] 00:10:39.314 }' 00:10:39.314 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.314 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.882 [2024-11-19 10:21:53.458337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.882 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.883 "name": "Existed_Raid", 00:10:39.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.883 "strip_size_kb": 64, 00:10:39.883 "state": "configuring", 00:10:39.883 "raid_level": "concat", 00:10:39.883 "superblock": false, 00:10:39.883 "num_base_bdevs": 4, 00:10:39.883 "num_base_bdevs_discovered": 3, 00:10:39.883 "num_base_bdevs_operational": 4, 00:10:39.883 "base_bdevs_list": [ 00:10:39.883 { 00:10:39.883 "name": "BaseBdev1", 00:10:39.883 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:39.883 "is_configured": true, 00:10:39.883 "data_offset": 0, 00:10:39.883 "data_size": 65536 00:10:39.883 }, 00:10:39.883 { 00:10:39.883 "name": null, 00:10:39.883 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:39.883 "is_configured": false, 00:10:39.883 "data_offset": 0, 00:10:39.883 "data_size": 65536 00:10:39.883 }, 00:10:39.883 { 00:10:39.883 "name": "BaseBdev3", 00:10:39.883 "uuid": 
"2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:39.883 "is_configured": true, 00:10:39.883 "data_offset": 0, 00:10:39.883 "data_size": 65536 00:10:39.883 }, 00:10:39.883 { 00:10:39.883 "name": "BaseBdev4", 00:10:39.883 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:39.883 "is_configured": true, 00:10:39.883 "data_offset": 0, 00:10:39.883 "data_size": 65536 00:10:39.883 } 00:10:39.883 ] 00:10:39.883 }' 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.883 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.142 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.142 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.142 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.142 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.142 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.401 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:40.401 10:21:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.401 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.401 10:21:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.401 [2024-11-19 10:21:53.945541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.401 "name": "Existed_Raid", 00:10:40.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.401 "strip_size_kb": 64, 00:10:40.401 "state": "configuring", 00:10:40.401 "raid_level": "concat", 00:10:40.401 "superblock": false, 00:10:40.401 "num_base_bdevs": 4, 00:10:40.401 
"num_base_bdevs_discovered": 2, 00:10:40.401 "num_base_bdevs_operational": 4, 00:10:40.401 "base_bdevs_list": [ 00:10:40.401 { 00:10:40.401 "name": null, 00:10:40.401 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:40.401 "is_configured": false, 00:10:40.401 "data_offset": 0, 00:10:40.401 "data_size": 65536 00:10:40.401 }, 00:10:40.401 { 00:10:40.401 "name": null, 00:10:40.401 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:40.401 "is_configured": false, 00:10:40.401 "data_offset": 0, 00:10:40.401 "data_size": 65536 00:10:40.401 }, 00:10:40.401 { 00:10:40.401 "name": "BaseBdev3", 00:10:40.401 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:40.401 "is_configured": true, 00:10:40.401 "data_offset": 0, 00:10:40.401 "data_size": 65536 00:10:40.401 }, 00:10:40.401 { 00:10:40.401 "name": "BaseBdev4", 00:10:40.401 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:40.401 "is_configured": true, 00:10:40.401 "data_offset": 0, 00:10:40.401 "data_size": 65536 00:10:40.401 } 00:10:40.401 ] 00:10:40.401 }' 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.401 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 [2024-11-19 10:21:54.519429] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.970 "name": "Existed_Raid", 00:10:40.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.970 "strip_size_kb": 64, 00:10:40.970 "state": "configuring", 00:10:40.970 "raid_level": "concat", 00:10:40.970 "superblock": false, 00:10:40.970 "num_base_bdevs": 4, 00:10:40.970 "num_base_bdevs_discovered": 3, 00:10:40.970 "num_base_bdevs_operational": 4, 00:10:40.970 "base_bdevs_list": [ 00:10:40.970 { 00:10:40.970 "name": null, 00:10:40.970 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:40.970 "is_configured": false, 00:10:40.970 "data_offset": 0, 00:10:40.970 "data_size": 65536 00:10:40.970 }, 00:10:40.970 { 00:10:40.970 "name": "BaseBdev2", 00:10:40.970 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:40.970 "is_configured": true, 00:10:40.970 "data_offset": 0, 00:10:40.970 "data_size": 65536 00:10:40.970 }, 00:10:40.970 { 00:10:40.970 "name": "BaseBdev3", 00:10:40.970 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:40.970 "is_configured": true, 00:10:40.970 "data_offset": 0, 00:10:40.970 "data_size": 65536 00:10:40.970 }, 00:10:40.970 { 00:10:40.970 "name": "BaseBdev4", 00:10:40.970 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:40.970 "is_configured": true, 00:10:40.970 "data_offset": 0, 00:10:40.970 "data_size": 65536 00:10:40.970 } 00:10:40.970 ] 00:10:40.970 }' 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.970 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.228 10:21:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:41.228 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f0ceb764-4a8b-4cfe-9cae-a24511734295 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.487 [2024-11-19 10:21:55.079903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:41.487 [2024-11-19 10:21:55.079961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:41.487 [2024-11-19 10:21:55.079969] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:41.487 [2024-11-19 10:21:55.080239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:41.487 [2024-11-19 10:21:55.080407] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:41.487 [2024-11-19 10:21:55.080428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:41.487 [2024-11-19 10:21:55.080674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.487 NewBaseBdev 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:41.487 [ 00:10:41.487 { 00:10:41.487 "name": "NewBaseBdev", 00:10:41.487 "aliases": [ 00:10:41.487 "f0ceb764-4a8b-4cfe-9cae-a24511734295" 00:10:41.487 ], 00:10:41.487 "product_name": "Malloc disk", 00:10:41.487 "block_size": 512, 00:10:41.487 "num_blocks": 65536, 00:10:41.487 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:41.487 "assigned_rate_limits": { 00:10:41.487 "rw_ios_per_sec": 0, 00:10:41.487 "rw_mbytes_per_sec": 0, 00:10:41.487 "r_mbytes_per_sec": 0, 00:10:41.487 "w_mbytes_per_sec": 0 00:10:41.487 }, 00:10:41.487 "claimed": true, 00:10:41.487 "claim_type": "exclusive_write", 00:10:41.487 "zoned": false, 00:10:41.487 "supported_io_types": { 00:10:41.487 "read": true, 00:10:41.487 "write": true, 00:10:41.487 "unmap": true, 00:10:41.487 "flush": true, 00:10:41.487 "reset": true, 00:10:41.487 "nvme_admin": false, 00:10:41.487 "nvme_io": false, 00:10:41.487 "nvme_io_md": false, 00:10:41.487 "write_zeroes": true, 00:10:41.487 "zcopy": true, 00:10:41.487 "get_zone_info": false, 00:10:41.487 "zone_management": false, 00:10:41.487 "zone_append": false, 00:10:41.487 "compare": false, 00:10:41.487 "compare_and_write": false, 00:10:41.487 "abort": true, 00:10:41.487 "seek_hole": false, 00:10:41.487 "seek_data": false, 00:10:41.487 "copy": true, 00:10:41.487 "nvme_iov_md": false 00:10:41.487 }, 00:10:41.487 "memory_domains": [ 00:10:41.487 { 00:10:41.487 "dma_device_id": "system", 00:10:41.487 "dma_device_type": 1 00:10:41.487 }, 00:10:41.487 { 00:10:41.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.487 "dma_device_type": 2 00:10:41.487 } 00:10:41.487 ], 00:10:41.487 "driver_specific": {} 00:10:41.487 } 00:10:41.487 ] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.487 "name": "Existed_Raid", 00:10:41.487 "uuid": "8d81f2ee-bb4b-474a-80b6-be1bab8b5394", 00:10:41.487 "strip_size_kb": 64, 00:10:41.487 "state": "online", 00:10:41.487 "raid_level": "concat", 00:10:41.487 "superblock": false, 00:10:41.487 
"num_base_bdevs": 4, 00:10:41.487 "num_base_bdevs_discovered": 4, 00:10:41.487 "num_base_bdevs_operational": 4, 00:10:41.487 "base_bdevs_list": [ 00:10:41.487 { 00:10:41.487 "name": "NewBaseBdev", 00:10:41.487 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:41.487 "is_configured": true, 00:10:41.487 "data_offset": 0, 00:10:41.487 "data_size": 65536 00:10:41.487 }, 00:10:41.487 { 00:10:41.487 "name": "BaseBdev2", 00:10:41.487 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:41.487 "is_configured": true, 00:10:41.487 "data_offset": 0, 00:10:41.487 "data_size": 65536 00:10:41.487 }, 00:10:41.487 { 00:10:41.487 "name": "BaseBdev3", 00:10:41.487 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:41.487 "is_configured": true, 00:10:41.487 "data_offset": 0, 00:10:41.487 "data_size": 65536 00:10:41.487 }, 00:10:41.487 { 00:10:41.487 "name": "BaseBdev4", 00:10:41.487 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:41.487 "is_configured": true, 00:10:41.487 "data_offset": 0, 00:10:41.487 "data_size": 65536 00:10:41.487 } 00:10:41.487 ] 00:10:41.487 }' 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.487 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.055 10:21:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.055 [2024-11-19 10:21:55.559571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.055 "name": "Existed_Raid", 00:10:42.055 "aliases": [ 00:10:42.055 "8d81f2ee-bb4b-474a-80b6-be1bab8b5394" 00:10:42.055 ], 00:10:42.055 "product_name": "Raid Volume", 00:10:42.055 "block_size": 512, 00:10:42.055 "num_blocks": 262144, 00:10:42.055 "uuid": "8d81f2ee-bb4b-474a-80b6-be1bab8b5394", 00:10:42.055 "assigned_rate_limits": { 00:10:42.055 "rw_ios_per_sec": 0, 00:10:42.055 "rw_mbytes_per_sec": 0, 00:10:42.055 "r_mbytes_per_sec": 0, 00:10:42.055 "w_mbytes_per_sec": 0 00:10:42.055 }, 00:10:42.055 "claimed": false, 00:10:42.055 "zoned": false, 00:10:42.055 "supported_io_types": { 00:10:42.055 "read": true, 00:10:42.055 "write": true, 00:10:42.055 "unmap": true, 00:10:42.055 "flush": true, 00:10:42.055 "reset": true, 00:10:42.055 "nvme_admin": false, 00:10:42.055 "nvme_io": false, 00:10:42.055 "nvme_io_md": false, 00:10:42.055 "write_zeroes": true, 00:10:42.055 "zcopy": false, 00:10:42.055 "get_zone_info": false, 00:10:42.055 "zone_management": false, 00:10:42.055 "zone_append": false, 00:10:42.055 "compare": false, 00:10:42.055 "compare_and_write": false, 00:10:42.055 "abort": false, 00:10:42.055 "seek_hole": false, 00:10:42.055 "seek_data": false, 00:10:42.055 "copy": false, 00:10:42.055 "nvme_iov_md": false 00:10:42.055 }, 
00:10:42.055 "memory_domains": [ 00:10:42.055 { 00:10:42.055 "dma_device_id": "system", 00:10:42.055 "dma_device_type": 1 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.055 "dma_device_type": 2 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "system", 00:10:42.055 "dma_device_type": 1 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.055 "dma_device_type": 2 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "system", 00:10:42.055 "dma_device_type": 1 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.055 "dma_device_type": 2 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "system", 00:10:42.055 "dma_device_type": 1 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.055 "dma_device_type": 2 00:10:42.055 } 00:10:42.055 ], 00:10:42.055 "driver_specific": { 00:10:42.055 "raid": { 00:10:42.055 "uuid": "8d81f2ee-bb4b-474a-80b6-be1bab8b5394", 00:10:42.055 "strip_size_kb": 64, 00:10:42.055 "state": "online", 00:10:42.055 "raid_level": "concat", 00:10:42.055 "superblock": false, 00:10:42.055 "num_base_bdevs": 4, 00:10:42.055 "num_base_bdevs_discovered": 4, 00:10:42.055 "num_base_bdevs_operational": 4, 00:10:42.055 "base_bdevs_list": [ 00:10:42.055 { 00:10:42.055 "name": "NewBaseBdev", 00:10:42.055 "uuid": "f0ceb764-4a8b-4cfe-9cae-a24511734295", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 0, 00:10:42.055 "data_size": 65536 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "name": "BaseBdev2", 00:10:42.055 "uuid": "9dd3c707-dcd9-43ef-9399-3b46285775de", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 0, 00:10:42.055 "data_size": 65536 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "name": "BaseBdev3", 00:10:42.055 "uuid": "2ce68f74-a04c-4422-81c5-224cd2b4f459", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 0, 
00:10:42.055 "data_size": 65536 00:10:42.055 }, 00:10:42.055 { 00:10:42.055 "name": "BaseBdev4", 00:10:42.055 "uuid": "cca84588-9761-4874-a3d2-6357830b8352", 00:10:42.055 "is_configured": true, 00:10:42.055 "data_offset": 0, 00:10:42.055 "data_size": 65536 00:10:42.055 } 00:10:42.055 ] 00:10:42.055 } 00:10:42.055 } 00:10:42.055 }' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.055 BaseBdev2 00:10:42.055 BaseBdev3 00:10:42.055 BaseBdev4' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.055 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.056 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.314 [2024-11-19 10:21:55.842659] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.314 [2024-11-19 10:21:55.842692] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.314 [2024-11-19 10:21:55.842781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.314 [2024-11-19 10:21:55.842853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.314 [2024-11-19 10:21:55.842874] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71057 00:10:42.314 10:21:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71057 ']' 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71057 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71057 00:10:42.314 killing process with pid 71057 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71057' 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71057 00:10:42.314 [2024-11-19 10:21:55.875309] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.314 10:21:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71057 00:10:42.572 [2024-11-19 10:21:56.282203] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:43.948 00:10:43.948 real 0m11.150s 00:10:43.948 user 0m17.810s 00:10:43.948 sys 0m1.890s 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.948 ************************************ 00:10:43.948 END TEST raid_state_function_test 00:10:43.948 ************************************ 00:10:43.948 10:21:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:43.948 10:21:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:43.948 10:21:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:43.948 10:21:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:43.948 ************************************ 00:10:43.948 START TEST raid_state_function_test_sb 00:10:43.948 ************************************ 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71723 00:10:43.948 Process raid 
pid: 71723 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71723' 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71723 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71723 ']' 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:43.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.948 10:21:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:43.948 [2024-11-19 10:21:57.521253] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:43.948 [2024-11-19 10:21:57.521381] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.948 [2024-11-19 10:21:57.694279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.207 [2024-11-19 10:21:57.810558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.464 [2024-11-19 10:21:58.018234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.464 [2024-11-19 10:21:58.018357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.723 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.723 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:44.723 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:44.723 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.723 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.723 [2024-11-19 10:21:58.348643] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:44.724 [2024-11-19 10:21:58.348695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:44.724 [2024-11-19 10:21:58.348706] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:44.724 [2024-11-19 10:21:58.348716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:44.724 [2024-11-19 10:21:58.348723] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:44.724 [2024-11-19 10:21:58.348732] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:44.724 [2024-11-19 10:21:58.348738] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:44.724 [2024-11-19 10:21:58.348746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.724 
10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.724 "name": "Existed_Raid", 00:10:44.724 "uuid": "f51e22a9-ca64-4d4f-bc5c-7c9bfea180fa", 00:10:44.724 "strip_size_kb": 64, 00:10:44.724 "state": "configuring", 00:10:44.724 "raid_level": "concat", 00:10:44.724 "superblock": true, 00:10:44.724 "num_base_bdevs": 4, 00:10:44.724 "num_base_bdevs_discovered": 0, 00:10:44.724 "num_base_bdevs_operational": 4, 00:10:44.724 "base_bdevs_list": [ 00:10:44.724 { 00:10:44.724 "name": "BaseBdev1", 00:10:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.724 "is_configured": false, 00:10:44.724 "data_offset": 0, 00:10:44.724 "data_size": 0 00:10:44.724 }, 00:10:44.724 { 00:10:44.724 "name": "BaseBdev2", 00:10:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.724 "is_configured": false, 00:10:44.724 "data_offset": 0, 00:10:44.724 "data_size": 0 00:10:44.724 }, 00:10:44.724 { 00:10:44.724 "name": "BaseBdev3", 00:10:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.724 "is_configured": false, 00:10:44.724 "data_offset": 0, 00:10:44.724 "data_size": 0 00:10:44.724 }, 00:10:44.724 { 00:10:44.724 "name": "BaseBdev4", 00:10:44.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:44.724 "is_configured": false, 00:10:44.724 "data_offset": 0, 00:10:44.724 "data_size": 0 00:10:44.724 } 00:10:44.724 ] 00:10:44.724 }' 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.724 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 10:21:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 [2024-11-19 10:21:58.783854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.294 [2024-11-19 10:21:58.783939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 [2024-11-19 10:21:58.791850] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.294 [2024-11-19 10:21:58.791933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.294 [2024-11-19 10:21:58.791971] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.294 [2024-11-19 10:21:58.792012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.294 [2024-11-19 10:21:58.792038] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.294 [2024-11-19 10:21:58.792061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.294 [2024-11-19 10:21:58.792111] bdev.c:8278:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:45.294 [2024-11-19 10:21:58.792123] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 [2024-11-19 10:21:58.836826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.294 BaseBdev1 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.294 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.294 [ 00:10:45.294 { 00:10:45.294 "name": "BaseBdev1", 00:10:45.294 "aliases": [ 00:10:45.294 "78541351-913d-43c7-90a9-bf92fa433c6c" 00:10:45.294 ], 00:10:45.294 "product_name": "Malloc disk", 00:10:45.294 "block_size": 512, 00:10:45.294 "num_blocks": 65536, 00:10:45.294 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c", 00:10:45.294 "assigned_rate_limits": { 00:10:45.294 "rw_ios_per_sec": 0, 00:10:45.294 "rw_mbytes_per_sec": 0, 00:10:45.294 "r_mbytes_per_sec": 0, 00:10:45.294 "w_mbytes_per_sec": 0 00:10:45.294 }, 00:10:45.294 "claimed": true, 00:10:45.294 "claim_type": "exclusive_write", 00:10:45.294 "zoned": false, 00:10:45.294 "supported_io_types": { 00:10:45.294 "read": true, 00:10:45.294 "write": true, 00:10:45.294 "unmap": true, 00:10:45.294 "flush": true, 00:10:45.294 "reset": true, 00:10:45.294 "nvme_admin": false, 00:10:45.294 "nvme_io": false, 00:10:45.294 "nvme_io_md": false, 00:10:45.294 "write_zeroes": true, 00:10:45.294 "zcopy": true, 00:10:45.294 "get_zone_info": false, 00:10:45.294 "zone_management": false, 00:10:45.294 "zone_append": false, 00:10:45.294 "compare": false, 00:10:45.294 "compare_and_write": false, 00:10:45.294 "abort": true, 00:10:45.294 "seek_hole": false, 00:10:45.294 "seek_data": false, 00:10:45.294 "copy": true, 00:10:45.294 "nvme_iov_md": false 00:10:45.294 }, 00:10:45.294 "memory_domains": [ 00:10:45.294 { 00:10:45.294 "dma_device_id": "system", 00:10:45.294 "dma_device_type": 1 00:10:45.294 }, 00:10:45.294 { 00:10:45.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.294 "dma_device_type": 2 00:10:45.294 } 
00:10:45.294 ], 00:10:45.294 "driver_specific": {} 00:10:45.294 } 00:10:45.294 ] 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.295 10:21:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.295 "name": "Existed_Raid", 00:10:45.295 "uuid": "3718bb5a-9442-4154-afd5-67362893212c", 00:10:45.295 "strip_size_kb": 64, 00:10:45.295 "state": "configuring", 00:10:45.295 "raid_level": "concat", 00:10:45.295 "superblock": true, 00:10:45.295 "num_base_bdevs": 4, 00:10:45.295 "num_base_bdevs_discovered": 1, 00:10:45.295 "num_base_bdevs_operational": 4, 00:10:45.295 "base_bdevs_list": [ 00:10:45.295 { 00:10:45.295 "name": "BaseBdev1", 00:10:45.295 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c", 00:10:45.295 "is_configured": true, 00:10:45.295 "data_offset": 2048, 00:10:45.295 "data_size": 63488 00:10:45.295 }, 00:10:45.295 { 00:10:45.295 "name": "BaseBdev2", 00:10:45.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.295 "is_configured": false, 00:10:45.295 "data_offset": 0, 00:10:45.295 "data_size": 0 00:10:45.295 }, 00:10:45.295 { 00:10:45.295 "name": "BaseBdev3", 00:10:45.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.295 "is_configured": false, 00:10:45.295 "data_offset": 0, 00:10:45.295 "data_size": 0 00:10:45.295 }, 00:10:45.295 { 00:10:45.295 "name": "BaseBdev4", 00:10:45.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.295 "is_configured": false, 00:10:45.295 "data_offset": 0, 00:10:45.295 "data_size": 0 00:10:45.295 } 00:10:45.295 ] 00:10:45.295 }' 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.295 10:21:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.554 10:21:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.554 [2024-11-19 10:21:59.316047] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.554 [2024-11-19 10:21:59.316142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.554 [2024-11-19 10:21:59.324095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.554 [2024-11-19 10:21:59.325929] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.554 [2024-11-19 10:21:59.326015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.554 [2024-11-19 10:21:59.326051] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.554 [2024-11-19 10:21:59.326077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.554 [2024-11-19 10:21:59.326128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.554 [2024-11-19 10:21:59.326152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.554 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.555 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.815 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.815 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.815 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:45.815 "name": "Existed_Raid", 00:10:45.815 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931", 00:10:45.815 "strip_size_kb": 64, 00:10:45.815 "state": "configuring", 00:10:45.815 "raid_level": "concat", 00:10:45.815 "superblock": true, 00:10:45.815 "num_base_bdevs": 4, 00:10:45.815 "num_base_bdevs_discovered": 1, 00:10:45.815 "num_base_bdevs_operational": 4, 00:10:45.815 "base_bdevs_list": [ 00:10:45.815 { 00:10:45.815 "name": "BaseBdev1", 00:10:45.815 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c", 00:10:45.815 "is_configured": true, 00:10:45.815 "data_offset": 2048, 00:10:45.815 "data_size": 63488 00:10:45.815 }, 00:10:45.815 { 00:10:45.815 "name": "BaseBdev2", 00:10:45.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.815 "is_configured": false, 00:10:45.815 "data_offset": 0, 00:10:45.815 "data_size": 0 00:10:45.815 }, 00:10:45.815 { 00:10:45.815 "name": "BaseBdev3", 00:10:45.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.815 "is_configured": false, 00:10:45.815 "data_offset": 0, 00:10:45.815 "data_size": 0 00:10:45.815 }, 00:10:45.815 { 00:10:45.815 "name": "BaseBdev4", 00:10:45.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.815 "is_configured": false, 00:10:45.815 "data_offset": 0, 00:10:45.816 "data_size": 0 00:10:45.816 } 00:10:45.816 ] 00:10:45.816 }' 00:10:45.816 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.816 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.075 [2024-11-19 10:21:59.782272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:46.075 BaseBdev2 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.075 [ 00:10:46.075 { 00:10:46.075 "name": "BaseBdev2", 00:10:46.075 "aliases": [ 00:10:46.075 "ee93c1d3-8fc5-41ed-9520-0751fee47193" 00:10:46.075 ], 00:10:46.075 "product_name": "Malloc disk", 00:10:46.075 "block_size": 512, 00:10:46.075 "num_blocks": 65536, 00:10:46.075 "uuid": "ee93c1d3-8fc5-41ed-9520-0751fee47193", 
00:10:46.075 "assigned_rate_limits": { 00:10:46.075 "rw_ios_per_sec": 0, 00:10:46.075 "rw_mbytes_per_sec": 0, 00:10:46.075 "r_mbytes_per_sec": 0, 00:10:46.075 "w_mbytes_per_sec": 0 00:10:46.075 }, 00:10:46.075 "claimed": true, 00:10:46.075 "claim_type": "exclusive_write", 00:10:46.075 "zoned": false, 00:10:46.075 "supported_io_types": { 00:10:46.075 "read": true, 00:10:46.075 "write": true, 00:10:46.075 "unmap": true, 00:10:46.075 "flush": true, 00:10:46.075 "reset": true, 00:10:46.075 "nvme_admin": false, 00:10:46.075 "nvme_io": false, 00:10:46.075 "nvme_io_md": false, 00:10:46.075 "write_zeroes": true, 00:10:46.075 "zcopy": true, 00:10:46.075 "get_zone_info": false, 00:10:46.075 "zone_management": false, 00:10:46.075 "zone_append": false, 00:10:46.075 "compare": false, 00:10:46.075 "compare_and_write": false, 00:10:46.075 "abort": true, 00:10:46.075 "seek_hole": false, 00:10:46.075 "seek_data": false, 00:10:46.075 "copy": true, 00:10:46.075 "nvme_iov_md": false 00:10:46.075 }, 00:10:46.075 "memory_domains": [ 00:10:46.075 { 00:10:46.075 "dma_device_id": "system", 00:10:46.075 "dma_device_type": 1 00:10:46.075 }, 00:10:46.075 { 00:10:46.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.075 "dma_device_type": 2 00:10:46.075 } 00:10:46.075 ], 00:10:46.075 "driver_specific": {} 00:10:46.075 } 00:10:46.075 ] 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.075 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.334 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.334 "name": "Existed_Raid", 00:10:46.334 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931", 00:10:46.334 "strip_size_kb": 64, 00:10:46.334 "state": "configuring", 00:10:46.334 "raid_level": "concat", 00:10:46.334 "superblock": true, 00:10:46.334 "num_base_bdevs": 4, 00:10:46.334 "num_base_bdevs_discovered": 2, 00:10:46.334 
"num_base_bdevs_operational": 4, 00:10:46.334 "base_bdevs_list": [ 00:10:46.334 { 00:10:46.334 "name": "BaseBdev1", 00:10:46.334 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c", 00:10:46.334 "is_configured": true, 00:10:46.334 "data_offset": 2048, 00:10:46.334 "data_size": 63488 00:10:46.334 }, 00:10:46.334 { 00:10:46.334 "name": "BaseBdev2", 00:10:46.334 "uuid": "ee93c1d3-8fc5-41ed-9520-0751fee47193", 00:10:46.334 "is_configured": true, 00:10:46.334 "data_offset": 2048, 00:10:46.334 "data_size": 63488 00:10:46.334 }, 00:10:46.334 { 00:10:46.334 "name": "BaseBdev3", 00:10:46.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.334 "is_configured": false, 00:10:46.334 "data_offset": 0, 00:10:46.334 "data_size": 0 00:10:46.334 }, 00:10:46.334 { 00:10:46.334 "name": "BaseBdev4", 00:10:46.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.334 "is_configured": false, 00:10:46.334 "data_offset": 0, 00:10:46.334 "data_size": 0 00:10:46.334 } 00:10:46.334 ] 00:10:46.334 }' 00:10:46.335 10:21:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.335 10:21:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.594 [2024-11-19 10:22:00.330796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:46.594 BaseBdev3 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.594 [ 00:10:46.594 { 00:10:46.594 "name": "BaseBdev3", 00:10:46.594 "aliases": [ 00:10:46.594 "ae357409-f613-4933-a18a-37966c8378d9" 00:10:46.594 ], 00:10:46.594 "product_name": "Malloc disk", 00:10:46.594 "block_size": 512, 00:10:46.594 "num_blocks": 65536, 00:10:46.594 "uuid": "ae357409-f613-4933-a18a-37966c8378d9", 00:10:46.594 "assigned_rate_limits": { 00:10:46.594 "rw_ios_per_sec": 0, 00:10:46.594 "rw_mbytes_per_sec": 0, 00:10:46.594 "r_mbytes_per_sec": 0, 00:10:46.594 "w_mbytes_per_sec": 0 00:10:46.594 }, 00:10:46.594 "claimed": true, 00:10:46.594 "claim_type": "exclusive_write", 00:10:46.594 "zoned": false, 00:10:46.594 "supported_io_types": { 
00:10:46.594 "read": true,
00:10:46.594 "write": true,
00:10:46.594 "unmap": true,
00:10:46.594 "flush": true,
00:10:46.594 "reset": true,
00:10:46.594 "nvme_admin": false,
00:10:46.594 "nvme_io": false,
00:10:46.594 "nvme_io_md": false,
00:10:46.594 "write_zeroes": true,
00:10:46.594 "zcopy": true,
00:10:46.594 "get_zone_info": false,
00:10:46.594 "zone_management": false,
00:10:46.594 "zone_append": false,
00:10:46.594 "compare": false,
00:10:46.594 "compare_and_write": false,
00:10:46.594 "abort": true,
00:10:46.594 "seek_hole": false,
00:10:46.594 "seek_data": false,
00:10:46.594 "copy": true,
00:10:46.594 "nvme_iov_md": false
00:10:46.594 },
00:10:46.594 "memory_domains": [
00:10:46.594 {
00:10:46.594 "dma_device_id": "system",
00:10:46.594 "dma_device_type": 1
00:10:46.594 },
00:10:46.594 {
00:10:46.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:46.594 "dma_device_type": 2
00:10:46.594 }
00:10:46.594 ],
00:10:46.594 "driver_specific": {}
00:10:46.594 }
00:10:46.594 ]
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:46.594 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:46.868 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:46.868 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:46.868 "name": "Existed_Raid",
00:10:46.868 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931",
00:10:46.868 "strip_size_kb": 64,
00:10:46.868 "state": "configuring",
00:10:46.868 "raid_level": "concat",
00:10:46.868 "superblock": true,
00:10:46.868 "num_base_bdevs": 4,
00:10:46.868 "num_base_bdevs_discovered": 3,
00:10:46.868 "num_base_bdevs_operational": 4,
00:10:46.868 "base_bdevs_list": [
00:10:46.868 {
00:10:46.868 "name": "BaseBdev1",
00:10:46.868 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c",
00:10:46.868 "is_configured": true,
00:10:46.868 "data_offset": 2048,
00:10:46.868 "data_size": 63488
00:10:46.868 },
00:10:46.868 {
00:10:46.868 "name": "BaseBdev2",
00:10:46.868 "uuid": "ee93c1d3-8fc5-41ed-9520-0751fee47193",
00:10:46.868 "is_configured": true,
00:10:46.868 "data_offset": 2048,
00:10:46.868 "data_size": 63488
00:10:46.868 },
00:10:46.868 {
00:10:46.868 "name": "BaseBdev3",
00:10:46.868 "uuid": "ae357409-f613-4933-a18a-37966c8378d9",
00:10:46.868 "is_configured": true,
00:10:46.868 "data_offset": 2048,
00:10:46.868 "data_size": 63488
00:10:46.868 },
00:10:46.868 {
00:10:46.868 "name": "BaseBdev4",
00:10:46.868 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:46.868 "is_configured": false,
00:10:46.868 "data_offset": 0,
00:10:46.868 "data_size": 0
00:10:46.868 }
00:10:46.868 ]
00:10:46.868 }'
00:10:46.868 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:46.868 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.150 [2024-11-19 10:22:00.809296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:10:47.150 [2024-11-19 10:22:00.809652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:47.150 [2024-11-19 10:22:00.809705] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:47.150 [2024-11-19 10:22:00.809989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:10:47.150 BaseBdev4
00:10:47.150 [2024-11-19 10:22:00.810194] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:47.150 [2024-11-19 10:22:00.810243] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.150 [2024-11-19 10:22:00.810436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.150 [
00:10:47.150 {
00:10:47.150 "name": "BaseBdev4",
00:10:47.150 "aliases": [
00:10:47.150 "40141276-3973-4312-8b85-53521149bcfb"
00:10:47.150 ],
00:10:47.150 "product_name": "Malloc disk",
00:10:47.150 "block_size": 512,
00:10:47.150 "num_blocks": 65536,
00:10:47.150 "uuid": "40141276-3973-4312-8b85-53521149bcfb",
00:10:47.150 "assigned_rate_limits": {
00:10:47.150 "rw_ios_per_sec": 0,
00:10:47.150 "rw_mbytes_per_sec": 0,
00:10:47.150 "r_mbytes_per_sec": 0,
00:10:47.150 "w_mbytes_per_sec": 0
00:10:47.150 },
00:10:47.150 "claimed": true,
00:10:47.150 "claim_type": "exclusive_write",
00:10:47.150 "zoned": false,
00:10:47.150 "supported_io_types": {
00:10:47.150 "read": true,
00:10:47.150 "write": true,
00:10:47.150 "unmap": true,
00:10:47.150 "flush": true,
00:10:47.150 "reset": true,
00:10:47.150 "nvme_admin": false,
00:10:47.150 "nvme_io": false,
00:10:47.150 "nvme_io_md": false,
00:10:47.150 "write_zeroes": true,
00:10:47.150 "zcopy": true,
00:10:47.150 "get_zone_info": false,
00:10:47.150 "zone_management": false,
00:10:47.150 "zone_append": false,
00:10:47.150 "compare": false,
00:10:47.150 "compare_and_write": false,
00:10:47.150 "abort": true,
00:10:47.150 "seek_hole": false,
00:10:47.150 "seek_data": false,
00:10:47.150 "copy": true,
00:10:47.150 "nvme_iov_md": false
00:10:47.150 },
00:10:47.150 "memory_domains": [
00:10:47.150 {
00:10:47.150 "dma_device_id": "system",
00:10:47.150 "dma_device_type": 1
00:10:47.150 },
00:10:47.150 {
00:10:47.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.150 "dma_device_type": 2
00:10:47.150 }
00:10:47.150 ],
00:10:47.150 "driver_specific": {}
00:10:47.150 }
00:10:47.150 ]
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.150 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.150 "name": "Existed_Raid",
00:10:47.150 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931",
00:10:47.150 "strip_size_kb": 64,
00:10:47.150 "state": "online",
00:10:47.150 "raid_level": "concat",
00:10:47.150 "superblock": true,
00:10:47.150 "num_base_bdevs": 4,
00:10:47.150 "num_base_bdevs_discovered": 4,
00:10:47.150 "num_base_bdevs_operational": 4,
00:10:47.150 "base_bdevs_list": [
00:10:47.150 {
00:10:47.150 "name": "BaseBdev1",
00:10:47.150 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c",
00:10:47.150 "is_configured": true,
00:10:47.150 "data_offset": 2048,
00:10:47.150 "data_size": 63488
00:10:47.150 },
00:10:47.150 {
00:10:47.150 "name": "BaseBdev2",
00:10:47.150 "uuid": "ee93c1d3-8fc5-41ed-9520-0751fee47193",
00:10:47.150 "is_configured": true,
00:10:47.150 "data_offset": 2048,
00:10:47.150 "data_size": 63488
00:10:47.150 },
00:10:47.150 {
00:10:47.150 "name": "BaseBdev3",
00:10:47.151 "uuid": "ae357409-f613-4933-a18a-37966c8378d9",
00:10:47.151 "is_configured": true,
00:10:47.151 "data_offset": 2048,
00:10:47.151 "data_size": 63488
00:10:47.151 },
00:10:47.151 {
00:10:47.151 "name": "BaseBdev4",
00:10:47.151 "uuid": "40141276-3973-4312-8b85-53521149bcfb",
00:10:47.151 "is_configured": true,
00:10:47.151 "data_offset": 2048,
00:10:47.151 "data_size": 63488
00:10:47.151 }
00:10:47.151 ]
00:10:47.151 }'
00:10:47.151 10:22:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.151 10:22:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.719 [2024-11-19 10:22:01.240957] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.719 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:47.719 "name": "Existed_Raid",
00:10:47.719 "aliases": [
00:10:47.719 "e4ad4a73-b923-498b-bd54-51c6e9899931"
00:10:47.719 ],
00:10:47.719 "product_name": "Raid Volume",
00:10:47.719 "block_size": 512,
00:10:47.719 "num_blocks": 253952,
00:10:47.719 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931",
00:10:47.719 "assigned_rate_limits": {
00:10:47.719 "rw_ios_per_sec": 0,
00:10:47.719 "rw_mbytes_per_sec": 0,
00:10:47.719 "r_mbytes_per_sec": 0,
00:10:47.719 "w_mbytes_per_sec": 0
00:10:47.719 },
00:10:47.719 "claimed": false,
00:10:47.719 "zoned": false,
00:10:47.719 "supported_io_types": {
00:10:47.719 "read": true,
00:10:47.719 "write": true,
00:10:47.719 "unmap": true,
00:10:47.720 "flush": true,
00:10:47.720 "reset": true,
00:10:47.720 "nvme_admin": false,
00:10:47.720 "nvme_io": false,
00:10:47.720 "nvme_io_md": false,
00:10:47.720 "write_zeroes": true,
00:10:47.720 "zcopy": false,
00:10:47.720 "get_zone_info": false,
00:10:47.720 "zone_management": false,
00:10:47.720 "zone_append": false,
00:10:47.720 "compare": false,
00:10:47.720 "compare_and_write": false,
00:10:47.720 "abort": false,
00:10:47.720 "seek_hole": false,
00:10:47.720 "seek_data": false,
00:10:47.720 "copy": false,
00:10:47.720 "nvme_iov_md": false
00:10:47.720 },
00:10:47.720 "memory_domains": [
00:10:47.720 {
00:10:47.720 "dma_device_id": "system",
00:10:47.720 "dma_device_type": 1
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.720 "dma_device_type": 2
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "system",
00:10:47.720 "dma_device_type": 1
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.720 "dma_device_type": 2
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "system",
00:10:47.720 "dma_device_type": 1
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.720 "dma_device_type": 2
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "system",
00:10:47.720 "dma_device_type": 1
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:47.720 "dma_device_type": 2
00:10:47.720 }
00:10:47.720 ],
00:10:47.720 "driver_specific": {
00:10:47.720 "raid": {
00:10:47.720 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931",
00:10:47.720 "strip_size_kb": 64,
00:10:47.720 "state": "online",
00:10:47.720 "raid_level": "concat",
00:10:47.720 "superblock": true,
00:10:47.720 "num_base_bdevs": 4,
00:10:47.720 "num_base_bdevs_discovered": 4,
00:10:47.720 "num_base_bdevs_operational": 4,
00:10:47.720 "base_bdevs_list": [
00:10:47.720 {
00:10:47.720 "name": "BaseBdev1",
00:10:47.720 "uuid": "78541351-913d-43c7-90a9-bf92fa433c6c",
00:10:47.720 "is_configured": true,
00:10:47.720 "data_offset": 2048,
00:10:47.720 "data_size": 63488
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "name": "BaseBdev2",
00:10:47.720 "uuid": "ee93c1d3-8fc5-41ed-9520-0751fee47193",
00:10:47.720 "is_configured": true,
00:10:47.720 "data_offset": 2048,
00:10:47.720 "data_size": 63488
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "name": "BaseBdev3",
00:10:47.720 "uuid": "ae357409-f613-4933-a18a-37966c8378d9",
00:10:47.720 "is_configured": true,
00:10:47.720 "data_offset": 2048,
00:10:47.720 "data_size": 63488
00:10:47.720 },
00:10:47.720 {
00:10:47.720 "name": "BaseBdev4",
00:10:47.720 "uuid": "40141276-3973-4312-8b85-53521149bcfb",
00:10:47.720 "is_configured": true,
00:10:47.720 "data_offset": 2048,
00:10:47.720 "data_size": 63488
00:10:47.720 }
00:10:47.720 ]
00:10:47.720 }
00:10:47.720 }
00:10:47.720 }'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:10:47.720 BaseBdev2
00:10:47.720 BaseBdev3
00:10:47.720 BaseBdev4'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.720 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.720 [2024-11-19 10:22:01.484239] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:10:47.720 [2024-11-19 10:22:01.484314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:47.720 [2024-11-19 10:22:01.484401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.979 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:47.979 "name": "Existed_Raid",
00:10:47.979 "uuid": "e4ad4a73-b923-498b-bd54-51c6e9899931",
00:10:47.979 "strip_size_kb": 64,
00:10:47.979 "state": "offline",
00:10:47.979 "raid_level": "concat",
00:10:47.979 "superblock": true,
00:10:47.979 "num_base_bdevs": 4,
00:10:47.979 "num_base_bdevs_discovered": 3,
00:10:47.979 "num_base_bdevs_operational": 3,
00:10:47.979 "base_bdevs_list": [
00:10:47.979 {
00:10:47.979 "name": null,
00:10:47.979 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:47.979 "is_configured": false,
00:10:47.979 "data_offset": 0,
00:10:47.979 "data_size": 63488
00:10:47.979 },
00:10:47.979 {
00:10:47.979 "name": "BaseBdev2",
00:10:47.979 "uuid": "ee93c1d3-8fc5-41ed-9520-0751fee47193",
00:10:47.979 "is_configured": true,
00:10:47.979 "data_offset": 2048,
00:10:47.979 "data_size": 63488
00:10:47.979 },
00:10:47.979 {
00:10:47.979 "name": "BaseBdev3",
00:10:47.979 "uuid": "ae357409-f613-4933-a18a-37966c8378d9",
00:10:47.979 "is_configured": true,
00:10:47.979 "data_offset": 2048,
00:10:47.979 "data_size": 63488
00:10:47.980 },
00:10:47.980 {
00:10:47.980 "name": "BaseBdev4",
00:10:47.980 "uuid": "40141276-3973-4312-8b85-53521149bcfb",
00:10:47.980 "is_configured": true,
00:10:47.980 "data_offset": 2048,
00:10:47.980 "data_size": 63488
00:10:47.980 }
00:10:47.980 ]
00:10:47.980 }'
00:10:47.980 10:22:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:47.980 10:22:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.548 [2024-11-19 10:22:02.066186] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.548 [2024-11-19 10:22:02.215930] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.548 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.807 [2024-11-19 10:22:02.365699] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:10:48.807 [2024-11-19 10:22:02.365793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.807 BaseBdev2
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:48.807 [
00:10:48.807 {
00:10:48.807 "name": "BaseBdev2",
00:10:48.807 "aliases": [
00:10:48.807 "1c7d9000-439a-4f62-868c-af1edd91b7a0"
00:10:48.807 ],
00:10:48.807 "product_name": "Malloc disk",
00:10:48.807 "block_size": 512,
00:10:48.807 "num_blocks": 65536,
00:10:48.807 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0",
00:10:48.807 "assigned_rate_limits": {
00:10:48.807 "rw_ios_per_sec": 0,
00:10:48.807 "rw_mbytes_per_sec": 0,
00:10:48.807 "r_mbytes_per_sec": 0,
00:10:48.807 "w_mbytes_per_sec": 0
00:10:48.807 },
00:10:48.807 "claimed": false,
00:10:48.807 "zoned": false,
00:10:48.807 "supported_io_types": {
00:10:48.807 "read": true,
00:10:48.807 "write": true,
00:10:48.807 "unmap": true,
00:10:48.807 "flush": true,
00:10:48.807 "reset": true,
00:10:48.807 "nvme_admin": false,
00:10:48.807 "nvme_io": false,
00:10:48.807 "nvme_io_md": false,
00:10:48.807 "write_zeroes": true,
00:10:48.807 "zcopy": true,
00:10:48.807 "get_zone_info": false,
00:10:48.807 "zone_management": false,
00:10:48.807 "zone_append": false,
00:10:48.807 "compare": false,
00:10:48.807 "compare_and_write": false,
00:10:48.807 "abort": true,
00:10:48.807 "seek_hole": false,
00:10:48.807 "seek_data": false,
00:10:48.807 "copy": true,
00:10:48.807 "nvme_iov_md": false
00:10:48.807 },
00:10:48.807 "memory_domains": [
00:10:48.807 {
00:10:48.807 "dma_device_id": "system",
00:10:48.807 "dma_device_type": 1
00:10:48.807 },
00:10:48.807 {
00:10:48.807 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:48.807 "dma_device_type": 2
00:10:48.807 }
00:10:48.807 ],
00:10:48.807 "driver_specific": {}
00:10:48.807 }
00:10:48.807 ]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.807 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.068 BaseBdev3
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:10:49.068 [
00:10:49.068 {
00:10:49.068 "name": "BaseBdev3", 00:10:49.068 "aliases": [ 00:10:49.068 "ffbeeee7-47e9-4bca-94ad-8e9405a82a08" 00:10:49.068 ], 00:10:49.068 "product_name": "Malloc disk", 00:10:49.068 "block_size": 512, 00:10:49.068 "num_blocks": 65536, 00:10:49.068 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:49.068 "assigned_rate_limits": { 00:10:49.068 "rw_ios_per_sec": 0, 00:10:49.068 "rw_mbytes_per_sec": 0, 00:10:49.068 "r_mbytes_per_sec": 0, 00:10:49.068 "w_mbytes_per_sec": 0 00:10:49.068 }, 00:10:49.068 "claimed": false, 00:10:49.068 "zoned": false, 00:10:49.068 "supported_io_types": { 00:10:49.068 "read": true, 00:10:49.068 "write": true, 00:10:49.068 "unmap": true, 00:10:49.068 "flush": true, 00:10:49.068 "reset": true, 00:10:49.068 "nvme_admin": false, 00:10:49.068 "nvme_io": false, 00:10:49.068 "nvme_io_md": false, 00:10:49.068 "write_zeroes": true, 00:10:49.068 "zcopy": true, 00:10:49.068 "get_zone_info": false, 00:10:49.068 "zone_management": false, 00:10:49.068 "zone_append": false, 00:10:49.068 "compare": false, 00:10:49.068 "compare_and_write": false, 00:10:49.068 "abort": true, 00:10:49.068 "seek_hole": false, 00:10:49.068 "seek_data": false, 00:10:49.068 "copy": true, 00:10:49.068 "nvme_iov_md": false 00:10:49.068 }, 00:10:49.068 "memory_domains": [ 00:10:49.068 { 00:10:49.068 "dma_device_id": "system", 00:10:49.068 "dma_device_type": 1 00:10:49.068 }, 00:10:49.068 { 00:10:49.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.068 "dma_device_type": 2 00:10:49.068 } 00:10:49.068 ], 00:10:49.068 "driver_specific": {} 00:10:49.068 } 00:10:49.068 ] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.068 BaseBdev4 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:49.068 [ 00:10:49.068 { 00:10:49.068 "name": "BaseBdev4", 00:10:49.068 "aliases": [ 00:10:49.068 "fae92c2f-f05f-460b-a5cb-4cf38c0ad345" 00:10:49.068 ], 00:10:49.068 "product_name": "Malloc disk", 00:10:49.068 "block_size": 512, 00:10:49.068 "num_blocks": 65536, 00:10:49.068 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:49.068 "assigned_rate_limits": { 00:10:49.068 "rw_ios_per_sec": 0, 00:10:49.068 "rw_mbytes_per_sec": 0, 00:10:49.068 "r_mbytes_per_sec": 0, 00:10:49.068 "w_mbytes_per_sec": 0 00:10:49.068 }, 00:10:49.068 "claimed": false, 00:10:49.068 "zoned": false, 00:10:49.068 "supported_io_types": { 00:10:49.068 "read": true, 00:10:49.068 "write": true, 00:10:49.068 "unmap": true, 00:10:49.068 "flush": true, 00:10:49.068 "reset": true, 00:10:49.068 "nvme_admin": false, 00:10:49.068 "nvme_io": false, 00:10:49.068 "nvme_io_md": false, 00:10:49.068 "write_zeroes": true, 00:10:49.068 "zcopy": true, 00:10:49.068 "get_zone_info": false, 00:10:49.068 "zone_management": false, 00:10:49.068 "zone_append": false, 00:10:49.068 "compare": false, 00:10:49.068 "compare_and_write": false, 00:10:49.068 "abort": true, 00:10:49.068 "seek_hole": false, 00:10:49.068 "seek_data": false, 00:10:49.068 "copy": true, 00:10:49.068 "nvme_iov_md": false 00:10:49.068 }, 00:10:49.068 "memory_domains": [ 00:10:49.068 { 00:10:49.068 "dma_device_id": "system", 00:10:49.068 "dma_device_type": 1 00:10:49.068 }, 00:10:49.068 { 00:10:49.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.068 "dma_device_type": 2 00:10:49.068 } 00:10:49.068 ], 00:10:49.068 "driver_specific": {} 00:10:49.068 } 00:10:49.068 ] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.068 10:22:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.068 [2024-11-19 10:22:02.718059] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.068 [2024-11-19 10:22:02.718149] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.068 [2024-11-19 10:22:02.718212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.068 [2024-11-19 10:22:02.720016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.068 [2024-11-19 10:22:02.720110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.068 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.069 "name": "Existed_Raid", 00:10:49.069 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:49.069 "strip_size_kb": 64, 00:10:49.069 "state": "configuring", 00:10:49.069 "raid_level": "concat", 00:10:49.069 "superblock": true, 00:10:49.069 "num_base_bdevs": 4, 00:10:49.069 "num_base_bdevs_discovered": 3, 00:10:49.069 "num_base_bdevs_operational": 4, 00:10:49.069 "base_bdevs_list": [ 00:10:49.069 { 00:10:49.069 "name": "BaseBdev1", 00:10:49.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.069 "is_configured": false, 00:10:49.069 "data_offset": 0, 00:10:49.069 "data_size": 0 00:10:49.069 }, 00:10:49.069 { 00:10:49.069 "name": "BaseBdev2", 00:10:49.069 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:49.069 "is_configured": true, 00:10:49.069 "data_offset": 2048, 00:10:49.069 "data_size": 63488 
00:10:49.069 }, 00:10:49.069 { 00:10:49.069 "name": "BaseBdev3", 00:10:49.069 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:49.069 "is_configured": true, 00:10:49.069 "data_offset": 2048, 00:10:49.069 "data_size": 63488 00:10:49.069 }, 00:10:49.069 { 00:10:49.069 "name": "BaseBdev4", 00:10:49.069 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:49.069 "is_configured": true, 00:10:49.069 "data_offset": 2048, 00:10:49.069 "data_size": 63488 00:10:49.069 } 00:10:49.069 ] 00:10:49.069 }' 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.069 10:22:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.638 [2024-11-19 10:22:03.137358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.638 "name": "Existed_Raid", 00:10:49.638 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:49.638 "strip_size_kb": 64, 00:10:49.638 "state": "configuring", 00:10:49.638 "raid_level": "concat", 00:10:49.638 "superblock": true, 00:10:49.638 "num_base_bdevs": 4, 00:10:49.638 "num_base_bdevs_discovered": 2, 00:10:49.638 "num_base_bdevs_operational": 4, 00:10:49.638 "base_bdevs_list": [ 00:10:49.638 { 00:10:49.638 "name": "BaseBdev1", 00:10:49.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.638 "is_configured": false, 00:10:49.638 "data_offset": 0, 00:10:49.638 "data_size": 0 00:10:49.638 }, 00:10:49.638 { 00:10:49.638 "name": null, 00:10:49.638 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:49.638 "is_configured": false, 00:10:49.638 "data_offset": 0, 00:10:49.638 "data_size": 63488 
00:10:49.638 }, 00:10:49.638 { 00:10:49.638 "name": "BaseBdev3", 00:10:49.638 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:49.638 "is_configured": true, 00:10:49.638 "data_offset": 2048, 00:10:49.638 "data_size": 63488 00:10:49.638 }, 00:10:49.638 { 00:10:49.638 "name": "BaseBdev4", 00:10:49.638 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:49.638 "is_configured": true, 00:10:49.638 "data_offset": 2048, 00:10:49.638 "data_size": 63488 00:10:49.638 } 00:10:49.638 ] 00:10:49.638 }' 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.638 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.898 [2024-11-19 10:22:03.636531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:49.898 BaseBdev1 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.898 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.898 [ 00:10:49.898 { 00:10:49.898 "name": "BaseBdev1", 00:10:49.898 "aliases": [ 00:10:49.898 "53bca89d-91fa-4066-8a59-60ef56f3588c" 00:10:49.898 ], 00:10:49.898 "product_name": "Malloc disk", 00:10:49.898 "block_size": 512, 00:10:49.898 "num_blocks": 65536, 00:10:49.898 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:49.898 "assigned_rate_limits": { 00:10:49.898 "rw_ios_per_sec": 0, 00:10:49.898 "rw_mbytes_per_sec": 0, 
00:10:49.898 "r_mbytes_per_sec": 0, 00:10:49.898 "w_mbytes_per_sec": 0 00:10:49.898 }, 00:10:49.898 "claimed": true, 00:10:49.898 "claim_type": "exclusive_write", 00:10:49.898 "zoned": false, 00:10:49.898 "supported_io_types": { 00:10:49.898 "read": true, 00:10:49.898 "write": true, 00:10:49.898 "unmap": true, 00:10:49.898 "flush": true, 00:10:49.898 "reset": true, 00:10:49.898 "nvme_admin": false, 00:10:49.898 "nvme_io": false, 00:10:49.898 "nvme_io_md": false, 00:10:49.898 "write_zeroes": true, 00:10:49.898 "zcopy": true, 00:10:49.898 "get_zone_info": false, 00:10:49.898 "zone_management": false, 00:10:49.898 "zone_append": false, 00:10:49.898 "compare": false, 00:10:49.898 "compare_and_write": false, 00:10:49.898 "abort": true, 00:10:49.898 "seek_hole": false, 00:10:49.898 "seek_data": false, 00:10:49.898 "copy": true, 00:10:49.898 "nvme_iov_md": false 00:10:49.898 }, 00:10:49.898 "memory_domains": [ 00:10:49.898 { 00:10:49.898 "dma_device_id": "system", 00:10:49.898 "dma_device_type": 1 00:10:49.898 }, 00:10:49.898 { 00:10:49.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.899 "dma_device_type": 2 00:10:49.899 } 00:10:49.899 ], 00:10:49.899 "driver_specific": {} 00:10:49.899 } 00:10:49.899 ] 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.899 10:22:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.899 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.158 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.158 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.158 "name": "Existed_Raid", 00:10:50.158 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:50.158 "strip_size_kb": 64, 00:10:50.158 "state": "configuring", 00:10:50.158 "raid_level": "concat", 00:10:50.158 "superblock": true, 00:10:50.158 "num_base_bdevs": 4, 00:10:50.158 "num_base_bdevs_discovered": 3, 00:10:50.158 "num_base_bdevs_operational": 4, 00:10:50.158 "base_bdevs_list": [ 00:10:50.158 { 00:10:50.158 "name": "BaseBdev1", 00:10:50.158 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:50.158 "is_configured": true, 00:10:50.158 "data_offset": 2048, 00:10:50.158 "data_size": 63488 00:10:50.158 }, 00:10:50.158 { 
00:10:50.158 "name": null, 00:10:50.158 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:50.158 "is_configured": false, 00:10:50.158 "data_offset": 0, 00:10:50.158 "data_size": 63488 00:10:50.158 }, 00:10:50.158 { 00:10:50.158 "name": "BaseBdev3", 00:10:50.158 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:50.158 "is_configured": true, 00:10:50.158 "data_offset": 2048, 00:10:50.158 "data_size": 63488 00:10:50.158 }, 00:10:50.158 { 00:10:50.158 "name": "BaseBdev4", 00:10:50.158 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:50.158 "is_configured": true, 00:10:50.158 "data_offset": 2048, 00:10:50.158 "data_size": 63488 00:10:50.158 } 00:10:50.158 ] 00:10:50.158 }' 00:10:50.158 10:22:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.158 10:22:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.418 [2024-11-19 10:22:04.155723] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.418 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.419 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.678 10:22:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.678 "name": "Existed_Raid", 00:10:50.678 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:50.678 "strip_size_kb": 64, 00:10:50.678 "state": "configuring", 00:10:50.678 "raid_level": "concat", 00:10:50.678 "superblock": true, 00:10:50.678 "num_base_bdevs": 4, 00:10:50.678 "num_base_bdevs_discovered": 2, 00:10:50.678 "num_base_bdevs_operational": 4, 00:10:50.678 "base_bdevs_list": [ 00:10:50.678 { 00:10:50.678 "name": "BaseBdev1", 00:10:50.678 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:50.678 "is_configured": true, 00:10:50.678 "data_offset": 2048, 00:10:50.678 "data_size": 63488 00:10:50.678 }, 00:10:50.678 { 00:10:50.678 "name": null, 00:10:50.678 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:50.678 "is_configured": false, 00:10:50.678 "data_offset": 0, 00:10:50.678 "data_size": 63488 00:10:50.678 }, 00:10:50.678 { 00:10:50.678 "name": null, 00:10:50.678 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:50.678 "is_configured": false, 00:10:50.678 "data_offset": 0, 00:10:50.678 "data_size": 63488 00:10:50.678 }, 00:10:50.678 { 00:10:50.678 "name": "BaseBdev4", 00:10:50.678 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:50.678 "is_configured": true, 00:10:50.678 "data_offset": 2048, 00:10:50.678 "data_size": 63488 00:10:50.678 } 00:10:50.678 ] 00:10:50.678 }' 00:10:50.678 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.678 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 10:22:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.938 [2024-11-19 10:22:04.658856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.938 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.939 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.939 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.939 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.939 "name": "Existed_Raid", 00:10:50.939 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:50.939 "strip_size_kb": 64, 00:10:50.939 "state": "configuring", 00:10:50.939 "raid_level": "concat", 00:10:50.939 "superblock": true, 00:10:50.939 "num_base_bdevs": 4, 00:10:50.939 "num_base_bdevs_discovered": 3, 00:10:50.939 "num_base_bdevs_operational": 4, 00:10:50.939 "base_bdevs_list": [ 00:10:50.939 { 00:10:50.939 "name": "BaseBdev1", 00:10:50.939 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:50.939 "is_configured": true, 00:10:50.939 "data_offset": 2048, 00:10:50.939 "data_size": 63488 00:10:50.939 }, 00:10:50.939 { 00:10:50.939 "name": null, 00:10:50.939 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:50.939 "is_configured": false, 00:10:50.939 "data_offset": 0, 00:10:50.939 "data_size": 63488 00:10:50.939 }, 00:10:50.939 { 00:10:50.939 "name": "BaseBdev3", 00:10:50.939 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:50.939 "is_configured": true, 00:10:50.939 "data_offset": 2048, 00:10:50.939 "data_size": 63488 00:10:50.939 }, 00:10:50.939 { 00:10:50.939 "name": "BaseBdev4", 00:10:50.939 "uuid": 
"fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:50.939 "is_configured": true, 00:10:50.939 "data_offset": 2048, 00:10:50.939 "data_size": 63488 00:10:50.939 } 00:10:50.939 ] 00:10:50.939 }' 00:10:50.939 10:22:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.939 10:22:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.507 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.508 [2024-11-19 10:22:05.110137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.508 "name": "Existed_Raid", 00:10:51.508 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:51.508 "strip_size_kb": 64, 00:10:51.508 "state": "configuring", 00:10:51.508 "raid_level": "concat", 00:10:51.508 "superblock": true, 00:10:51.508 "num_base_bdevs": 4, 00:10:51.508 "num_base_bdevs_discovered": 2, 00:10:51.508 "num_base_bdevs_operational": 4, 00:10:51.508 "base_bdevs_list": [ 00:10:51.508 { 00:10:51.508 "name": null, 00:10:51.508 
"uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:51.508 "is_configured": false, 00:10:51.508 "data_offset": 0, 00:10:51.508 "data_size": 63488 00:10:51.508 }, 00:10:51.508 { 00:10:51.508 "name": null, 00:10:51.508 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:51.508 "is_configured": false, 00:10:51.508 "data_offset": 0, 00:10:51.508 "data_size": 63488 00:10:51.508 }, 00:10:51.508 { 00:10:51.508 "name": "BaseBdev3", 00:10:51.508 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:51.508 "is_configured": true, 00:10:51.508 "data_offset": 2048, 00:10:51.508 "data_size": 63488 00:10:51.508 }, 00:10:51.508 { 00:10:51.508 "name": "BaseBdev4", 00:10:51.508 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:51.508 "is_configured": true, 00:10:51.508 "data_offset": 2048, 00:10:51.508 "data_size": 63488 00:10:51.508 } 00:10:51.508 ] 00:10:51.508 }' 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.508 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 [2024-11-19 10:22:05.660315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.076 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.076 "name": "Existed_Raid", 00:10:52.076 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:52.076 "strip_size_kb": 64, 00:10:52.076 "state": "configuring", 00:10:52.076 "raid_level": "concat", 00:10:52.076 "superblock": true, 00:10:52.076 "num_base_bdevs": 4, 00:10:52.076 "num_base_bdevs_discovered": 3, 00:10:52.076 "num_base_bdevs_operational": 4, 00:10:52.076 "base_bdevs_list": [ 00:10:52.076 { 00:10:52.076 "name": null, 00:10:52.076 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:52.076 "is_configured": false, 00:10:52.076 "data_offset": 0, 00:10:52.077 "data_size": 63488 00:10:52.077 }, 00:10:52.077 { 00:10:52.077 "name": "BaseBdev2", 00:10:52.077 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:52.077 "is_configured": true, 00:10:52.077 "data_offset": 2048, 00:10:52.077 "data_size": 63488 00:10:52.077 }, 00:10:52.077 { 00:10:52.077 "name": "BaseBdev3", 00:10:52.077 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:52.077 "is_configured": true, 00:10:52.077 "data_offset": 2048, 00:10:52.077 "data_size": 63488 00:10:52.077 }, 00:10:52.077 { 00:10:52.077 "name": "BaseBdev4", 00:10:52.077 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:52.077 "is_configured": true, 00:10:52.077 "data_offset": 2048, 00:10:52.077 "data_size": 63488 00:10:52.077 } 00:10:52.077 ] 00:10:52.077 }' 00:10:52.077 10:22:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.077 10:22:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.647 10:22:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 53bca89d-91fa-4066-8a59-60ef56f3588c 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 [2024-11-19 10:22:06.244995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:52.647 [2024-11-19 10:22:06.245284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:52.647 [2024-11-19 10:22:06.245300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.647 NewBaseBdev 00:10:52.647 [2024-11-19 10:22:06.245582] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:52.647 [2024-11-19 10:22:06.245737] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:52.647 [2024-11-19 10:22:06.245751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.647 [2024-11-19 10:22:06.245885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.647 
10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 [ 00:10:52.647 { 00:10:52.647 "name": "NewBaseBdev", 00:10:52.647 "aliases": [ 00:10:52.647 "53bca89d-91fa-4066-8a59-60ef56f3588c" 00:10:52.647 ], 00:10:52.647 "product_name": "Malloc disk", 00:10:52.647 "block_size": 512, 00:10:52.647 "num_blocks": 65536, 00:10:52.647 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:52.647 "assigned_rate_limits": { 00:10:52.647 "rw_ios_per_sec": 0, 00:10:52.647 "rw_mbytes_per_sec": 0, 00:10:52.647 "r_mbytes_per_sec": 0, 00:10:52.647 "w_mbytes_per_sec": 0 00:10:52.647 }, 00:10:52.647 "claimed": true, 00:10:52.647 "claim_type": "exclusive_write", 00:10:52.647 "zoned": false, 00:10:52.647 "supported_io_types": { 00:10:52.647 "read": true, 00:10:52.647 "write": true, 00:10:52.647 "unmap": true, 00:10:52.647 "flush": true, 00:10:52.647 "reset": true, 00:10:52.647 "nvme_admin": false, 00:10:52.647 "nvme_io": false, 00:10:52.647 "nvme_io_md": false, 00:10:52.647 "write_zeroes": true, 00:10:52.647 "zcopy": true, 00:10:52.647 "get_zone_info": false, 00:10:52.647 "zone_management": false, 00:10:52.647 "zone_append": false, 00:10:52.647 "compare": false, 00:10:52.647 "compare_and_write": false, 00:10:52.647 "abort": true, 00:10:52.647 "seek_hole": false, 00:10:52.647 "seek_data": false, 00:10:52.647 "copy": true, 00:10:52.647 "nvme_iov_md": false 00:10:52.647 }, 00:10:52.647 "memory_domains": [ 00:10:52.647 { 00:10:52.647 "dma_device_id": "system", 00:10:52.647 "dma_device_type": 1 00:10:52.647 }, 00:10:52.647 { 00:10:52.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.647 "dma_device_type": 2 00:10:52.647 } 00:10:52.647 ], 00:10:52.647 "driver_specific": {} 00:10:52.647 } 00:10:52.647 ] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:52.647 10:22:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.647 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.647 "name": "Existed_Raid", 00:10:52.647 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:52.647 "strip_size_kb": 64, 00:10:52.647 
"state": "online", 00:10:52.647 "raid_level": "concat", 00:10:52.647 "superblock": true, 00:10:52.647 "num_base_bdevs": 4, 00:10:52.647 "num_base_bdevs_discovered": 4, 00:10:52.647 "num_base_bdevs_operational": 4, 00:10:52.647 "base_bdevs_list": [ 00:10:52.647 { 00:10:52.647 "name": "NewBaseBdev", 00:10:52.647 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:52.647 "is_configured": true, 00:10:52.647 "data_offset": 2048, 00:10:52.647 "data_size": 63488 00:10:52.647 }, 00:10:52.647 { 00:10:52.647 "name": "BaseBdev2", 00:10:52.647 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:52.647 "is_configured": true, 00:10:52.647 "data_offset": 2048, 00:10:52.647 "data_size": 63488 00:10:52.647 }, 00:10:52.647 { 00:10:52.647 "name": "BaseBdev3", 00:10:52.647 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:52.647 "is_configured": true, 00:10:52.647 "data_offset": 2048, 00:10:52.647 "data_size": 63488 00:10:52.647 }, 00:10:52.647 { 00:10:52.647 "name": "BaseBdev4", 00:10:52.647 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:52.647 "is_configured": true, 00:10:52.647 "data_offset": 2048, 00:10:52.647 "data_size": 63488 00:10:52.648 } 00:10:52.648 ] 00:10:52.648 }' 00:10:52.648 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.648 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.216 
10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.216 [2024-11-19 10:22:06.700618] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.216 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.216 "name": "Existed_Raid", 00:10:53.216 "aliases": [ 00:10:53.216 "76209143-13b5-46d5-83d3-494eee1d0892" 00:10:53.216 ], 00:10:53.216 "product_name": "Raid Volume", 00:10:53.216 "block_size": 512, 00:10:53.216 "num_blocks": 253952, 00:10:53.216 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:53.216 "assigned_rate_limits": { 00:10:53.216 "rw_ios_per_sec": 0, 00:10:53.216 "rw_mbytes_per_sec": 0, 00:10:53.216 "r_mbytes_per_sec": 0, 00:10:53.216 "w_mbytes_per_sec": 0 00:10:53.216 }, 00:10:53.216 "claimed": false, 00:10:53.216 "zoned": false, 00:10:53.216 "supported_io_types": { 00:10:53.216 "read": true, 00:10:53.216 "write": true, 00:10:53.216 "unmap": true, 00:10:53.216 "flush": true, 00:10:53.216 "reset": true, 00:10:53.216 "nvme_admin": false, 00:10:53.216 "nvme_io": false, 00:10:53.216 "nvme_io_md": false, 00:10:53.216 "write_zeroes": true, 00:10:53.216 "zcopy": false, 00:10:53.216 "get_zone_info": false, 00:10:53.216 "zone_management": false, 00:10:53.216 "zone_append": false, 00:10:53.216 "compare": false, 00:10:53.216 "compare_and_write": false, 00:10:53.216 "abort": 
false, 00:10:53.216 "seek_hole": false, 00:10:53.216 "seek_data": false, 00:10:53.216 "copy": false, 00:10:53.216 "nvme_iov_md": false 00:10:53.216 }, 00:10:53.216 "memory_domains": [ 00:10:53.216 { 00:10:53.216 "dma_device_id": "system", 00:10:53.216 "dma_device_type": 1 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.216 "dma_device_type": 2 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "system", 00:10:53.216 "dma_device_type": 1 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.216 "dma_device_type": 2 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "system", 00:10:53.216 "dma_device_type": 1 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.216 "dma_device_type": 2 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "system", 00:10:53.216 "dma_device_type": 1 00:10:53.216 }, 00:10:53.216 { 00:10:53.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.216 "dma_device_type": 2 00:10:53.216 } 00:10:53.216 ], 00:10:53.216 "driver_specific": { 00:10:53.216 "raid": { 00:10:53.216 "uuid": "76209143-13b5-46d5-83d3-494eee1d0892", 00:10:53.216 "strip_size_kb": 64, 00:10:53.216 "state": "online", 00:10:53.216 "raid_level": "concat", 00:10:53.217 "superblock": true, 00:10:53.217 "num_base_bdevs": 4, 00:10:53.217 "num_base_bdevs_discovered": 4, 00:10:53.217 "num_base_bdevs_operational": 4, 00:10:53.217 "base_bdevs_list": [ 00:10:53.217 { 00:10:53.217 "name": "NewBaseBdev", 00:10:53.217 "uuid": "53bca89d-91fa-4066-8a59-60ef56f3588c", 00:10:53.217 "is_configured": true, 00:10:53.217 "data_offset": 2048, 00:10:53.217 "data_size": 63488 00:10:53.217 }, 00:10:53.217 { 00:10:53.217 "name": "BaseBdev2", 00:10:53.217 "uuid": "1c7d9000-439a-4f62-868c-af1edd91b7a0", 00:10:53.217 "is_configured": true, 00:10:53.217 "data_offset": 2048, 00:10:53.217 "data_size": 63488 00:10:53.217 }, 00:10:53.217 { 00:10:53.217 
"name": "BaseBdev3", 00:10:53.217 "uuid": "ffbeeee7-47e9-4bca-94ad-8e9405a82a08", 00:10:53.217 "is_configured": true, 00:10:53.217 "data_offset": 2048, 00:10:53.217 "data_size": 63488 00:10:53.217 }, 00:10:53.217 { 00:10:53.217 "name": "BaseBdev4", 00:10:53.217 "uuid": "fae92c2f-f05f-460b-a5cb-4cf38c0ad345", 00:10:53.217 "is_configured": true, 00:10:53.217 "data_offset": 2048, 00:10:53.217 "data_size": 63488 00:10:53.217 } 00:10:53.217 ] 00:10:53.217 } 00:10:53.217 } 00:10:53.217 }' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.217 BaseBdev2 00:10:53.217 BaseBdev3 00:10:53.217 BaseBdev4' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.217 10:22:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.217 10:22:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.476 [2024-11-19 10:22:07.031667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.476 [2024-11-19 10:22:07.031698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.476 [2024-11-19 10:22:07.031774] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.476 [2024-11-19 10:22:07.031845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.476 [2024-11-19 10:22:07.031855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71723 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71723 ']' 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71723 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71723 00:10:53.476 killing process with pid 71723 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71723' 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71723 00:10:53.476 10:22:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71723 00:10:53.476 [2024-11-19 10:22:07.072299] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.734 [2024-11-19 10:22:07.470704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.161 10:22:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.161 00:10:55.161 real 0m11.154s 00:10:55.161 user 0m17.777s 00:10:55.161 sys 0m1.856s 00:10:55.161 10:22:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.161 10:22:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.161 ************************************ 00:10:55.161 END TEST raid_state_function_test_sb 00:10:55.161 ************************************ 00:10:55.161 10:22:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:55.161 10:22:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.161 10:22:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.161 10:22:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.161 ************************************ 00:10:55.161 START TEST raid_superblock_test 00:10:55.161 ************************************ 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:55.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72395 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72395 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72395 ']' 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.161 10:22:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.161 [2024-11-19 10:22:08.736971] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:10:55.161 [2024-11-19 10:22:08.737178] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72395 ] 00:10:55.161 [2024-11-19 10:22:08.912382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.419 [2024-11-19 10:22:09.027867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.677 [2024-11-19 10:22:09.238286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.678 [2024-11-19 10:22:09.238420] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:55.936 
10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.936 malloc1 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.936 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.936 [2024-11-19 10:22:09.614389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:55.936 [2024-11-19 10:22:09.614523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.936 [2024-11-19 10:22:09.614566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:55.936 [2024-11-19 10:22:09.614608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.936 [2024-11-19 10:22:09.616732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.937 [2024-11-19 10:22:09.616807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:55.937 pt1 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.937 malloc2 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.937 [2024-11-19 10:22:09.670415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:55.937 [2024-11-19 10:22:09.670472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.937 [2024-11-19 10:22:09.670510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:55.937 [2024-11-19 10:22:09.670519] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.937 [2024-11-19 10:22:09.672710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.937 [2024-11-19 10:22:09.672747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:55.937 
pt2 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.937 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.195 malloc3 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.195 [2024-11-19 10:22:09.731232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.195 [2024-11-19 10:22:09.731287] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.195 [2024-11-19 10:22:09.731308] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:56.195 [2024-11-19 10:22:09.731318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.195 [2024-11-19 10:22:09.733540] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.195 [2024-11-19 10:22:09.733577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.195 pt3 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.195 malloc4 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:56.195 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.196 [2024-11-19 10:22:09.784376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:56.196 [2024-11-19 10:22:09.784473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.196 [2024-11-19 10:22:09.784496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:56.196 [2024-11-19 10:22:09.784505] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.196 [2024-11-19 10:22:09.786592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.196 [2024-11-19 10:22:09.786663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:56.196 pt4 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.196 [2024-11-19 10:22:09.796393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.196 [2024-11-19 
10:22:09.798243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.196 [2024-11-19 10:22:09.798349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.196 [2024-11-19 10:22:09.798417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:56.196 [2024-11-19 10:22:09.798603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:56.196 [2024-11-19 10:22:09.798616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.196 [2024-11-19 10:22:09.798860] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:56.196 [2024-11-19 10:22:09.799039] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:56.196 [2024-11-19 10:22:09.799054] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:56.196 [2024-11-19 10:22:09.799233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.196 "name": "raid_bdev1", 00:10:56.196 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:56.196 "strip_size_kb": 64, 00:10:56.196 "state": "online", 00:10:56.196 "raid_level": "concat", 00:10:56.196 "superblock": true, 00:10:56.196 "num_base_bdevs": 4, 00:10:56.196 "num_base_bdevs_discovered": 4, 00:10:56.196 "num_base_bdevs_operational": 4, 00:10:56.196 "base_bdevs_list": [ 00:10:56.196 { 00:10:56.196 "name": "pt1", 00:10:56.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.196 "is_configured": true, 00:10:56.196 "data_offset": 2048, 00:10:56.196 "data_size": 63488 00:10:56.196 }, 00:10:56.196 { 00:10:56.196 "name": "pt2", 00:10:56.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.196 "is_configured": true, 00:10:56.196 "data_offset": 2048, 00:10:56.196 "data_size": 63488 00:10:56.196 }, 00:10:56.196 { 00:10:56.196 "name": "pt3", 00:10:56.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.196 "is_configured": true, 00:10:56.196 "data_offset": 2048, 00:10:56.196 
"data_size": 63488 00:10:56.196 }, 00:10:56.196 { 00:10:56.196 "name": "pt4", 00:10:56.196 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.196 "is_configured": true, 00:10:56.196 "data_offset": 2048, 00:10:56.196 "data_size": 63488 00:10:56.196 } 00:10:56.196 ] 00:10:56.196 }' 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.196 10:22:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.762 [2024-11-19 10:22:10.248037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.762 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.762 "name": "raid_bdev1", 00:10:56.762 "aliases": [ 00:10:56.762 "1dd99be1-5937-41e0-b983-64ec19451a81" 
00:10:56.762 ], 00:10:56.762 "product_name": "Raid Volume", 00:10:56.762 "block_size": 512, 00:10:56.762 "num_blocks": 253952, 00:10:56.762 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:56.762 "assigned_rate_limits": { 00:10:56.762 "rw_ios_per_sec": 0, 00:10:56.762 "rw_mbytes_per_sec": 0, 00:10:56.762 "r_mbytes_per_sec": 0, 00:10:56.762 "w_mbytes_per_sec": 0 00:10:56.762 }, 00:10:56.762 "claimed": false, 00:10:56.762 "zoned": false, 00:10:56.762 "supported_io_types": { 00:10:56.762 "read": true, 00:10:56.762 "write": true, 00:10:56.762 "unmap": true, 00:10:56.762 "flush": true, 00:10:56.762 "reset": true, 00:10:56.762 "nvme_admin": false, 00:10:56.762 "nvme_io": false, 00:10:56.762 "nvme_io_md": false, 00:10:56.762 "write_zeroes": true, 00:10:56.762 "zcopy": false, 00:10:56.762 "get_zone_info": false, 00:10:56.762 "zone_management": false, 00:10:56.762 "zone_append": false, 00:10:56.762 "compare": false, 00:10:56.762 "compare_and_write": false, 00:10:56.762 "abort": false, 00:10:56.763 "seek_hole": false, 00:10:56.763 "seek_data": false, 00:10:56.763 "copy": false, 00:10:56.763 "nvme_iov_md": false 00:10:56.763 }, 00:10:56.763 "memory_domains": [ 00:10:56.763 { 00:10:56.763 "dma_device_id": "system", 00:10:56.763 "dma_device_type": 1 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.763 "dma_device_type": 2 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": "system", 00:10:56.763 "dma_device_type": 1 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.763 "dma_device_type": 2 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": "system", 00:10:56.763 "dma_device_type": 1 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.763 "dma_device_type": 2 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": "system", 00:10:56.763 "dma_device_type": 1 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:56.763 "dma_device_type": 2 00:10:56.763 } 00:10:56.763 ], 00:10:56.763 "driver_specific": { 00:10:56.763 "raid": { 00:10:56.763 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:56.763 "strip_size_kb": 64, 00:10:56.763 "state": "online", 00:10:56.763 "raid_level": "concat", 00:10:56.763 "superblock": true, 00:10:56.763 "num_base_bdevs": 4, 00:10:56.763 "num_base_bdevs_discovered": 4, 00:10:56.763 "num_base_bdevs_operational": 4, 00:10:56.763 "base_bdevs_list": [ 00:10:56.763 { 00:10:56.763 "name": "pt1", 00:10:56.763 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.763 "is_configured": true, 00:10:56.763 "data_offset": 2048, 00:10:56.763 "data_size": 63488 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "name": "pt2", 00:10:56.763 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.763 "is_configured": true, 00:10:56.763 "data_offset": 2048, 00:10:56.763 "data_size": 63488 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "name": "pt3", 00:10:56.763 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.763 "is_configured": true, 00:10:56.763 "data_offset": 2048, 00:10:56.763 "data_size": 63488 00:10:56.763 }, 00:10:56.763 { 00:10:56.763 "name": "pt4", 00:10:56.763 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.763 "is_configured": true, 00:10:56.763 "data_offset": 2048, 00:10:56.763 "data_size": 63488 00:10:56.763 } 00:10:56.763 ] 00:10:56.763 } 00:10:56.763 } 00:10:56.763 }' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:56.763 pt2 00:10:56.763 pt3 00:10:56.763 pt4' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.763 10:22:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.763 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 [2024-11-19 10:22:10.547426] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1dd99be1-5937-41e0-b983-64ec19451a81 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1dd99be1-5937-41e0-b983-64ec19451a81 ']' 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 [2024-11-19 10:22:10.579104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.022 [2024-11-19 10:22:10.579127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.022 [2024-11-19 10:22:10.579206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.022 [2024-11-19 10:22:10.579277] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.022 [2024-11-19 10:22:10.579290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:57.022 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.023 10:22:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.023 [2024-11-19 10:22:10.718894] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.023 [2024-11-19 10:22:10.720713] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.023 [2024-11-19 10:22:10.720761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:57.023 [2024-11-19 10:22:10.720793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:57.023 [2024-11-19 10:22:10.720843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:57.023 [2024-11-19 10:22:10.720892] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:57.023 [2024-11-19 10:22:10.720912] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:57.023 [2024-11-19 10:22:10.720929] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:57.023 [2024-11-19 10:22:10.720942] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.023 [2024-11-19 10:22:10.720952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:57.023 request: 00:10:57.023 { 00:10:57.023 "name": "raid_bdev1", 00:10:57.023 "raid_level": "concat", 00:10:57.023 "base_bdevs": [ 00:10:57.023 "malloc1", 00:10:57.023 "malloc2", 00:10:57.023 "malloc3", 00:10:57.023 "malloc4" 00:10:57.023 ], 00:10:57.023 "strip_size_kb": 64, 00:10:57.023 "superblock": false, 00:10:57.023 "method": "bdev_raid_create", 00:10:57.023 "req_id": 1 00:10:57.023 } 00:10:57.023 Got JSON-RPC error response 00:10:57.023 response: 00:10:57.023 { 00:10:57.023 "code": -17, 00:10:57.023 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.023 } 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.023 [2024-11-19 10:22:10.778762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.023 [2024-11-19 10:22:10.778857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.023 [2024-11-19 10:22:10.778900] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:57.023 [2024-11-19 10:22:10.778932] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.023 [2024-11-19 10:22:10.781071] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.023 [2024-11-19 10:22:10.781144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.023 [2024-11-19 10:22:10.781241] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.023 [2024-11-19 10:22:10.781335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.023 pt1 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.023 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.282 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.282 "name": "raid_bdev1", 00:10:57.282 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:57.282 "strip_size_kb": 64, 00:10:57.282 "state": "configuring", 00:10:57.282 "raid_level": "concat", 00:10:57.282 "superblock": true, 00:10:57.282 "num_base_bdevs": 4, 00:10:57.282 "num_base_bdevs_discovered": 1, 00:10:57.282 "num_base_bdevs_operational": 4, 00:10:57.282 "base_bdevs_list": [ 00:10:57.282 { 00:10:57.282 "name": "pt1", 00:10:57.282 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.282 "is_configured": true, 00:10:57.282 "data_offset": 2048, 00:10:57.282 "data_size": 63488 00:10:57.282 }, 00:10:57.282 { 00:10:57.282 "name": null, 00:10:57.282 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.282 "is_configured": false, 00:10:57.282 "data_offset": 2048, 00:10:57.282 "data_size": 63488 00:10:57.282 }, 00:10:57.282 { 00:10:57.282 "name": null, 00:10:57.282 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.282 "is_configured": false, 00:10:57.282 "data_offset": 2048, 00:10:57.282 "data_size": 63488 00:10:57.282 }, 00:10:57.282 { 00:10:57.282 "name": null, 00:10:57.282 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.282 "is_configured": false, 00:10:57.282 "data_offset": 2048, 00:10:57.282 "data_size": 63488 00:10:57.282 } 00:10:57.282 ] 00:10:57.282 }' 00:10:57.282 10:22:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.282 10:22:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.540 [2024-11-19 10:22:11.230028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.540 [2024-11-19 10:22:11.230097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.540 [2024-11-19 10:22:11.230116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:57.540 [2024-11-19 10:22:11.230127] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.540 [2024-11-19 10:22:11.230553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.540 [2024-11-19 10:22:11.230573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.540 [2024-11-19 10:22:11.230650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:57.540 [2024-11-19 10:22:11.230674] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.540 pt2 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.540 [2024-11-19 10:22:11.238016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.540 10:22:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.540 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.540 "name": "raid_bdev1", 00:10:57.540 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:57.540 "strip_size_kb": 64, 00:10:57.540 "state": "configuring", 00:10:57.540 "raid_level": "concat", 00:10:57.540 "superblock": true, 00:10:57.540 "num_base_bdevs": 4, 00:10:57.540 "num_base_bdevs_discovered": 1, 00:10:57.540 "num_base_bdevs_operational": 4, 00:10:57.540 "base_bdevs_list": [ 00:10:57.540 { 00:10:57.541 "name": "pt1", 00:10:57.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.541 "is_configured": true, 00:10:57.541 "data_offset": 2048, 00:10:57.541 "data_size": 63488 00:10:57.541 }, 00:10:57.541 { 00:10:57.541 "name": null, 00:10:57.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.541 "is_configured": false, 00:10:57.541 "data_offset": 0, 00:10:57.541 "data_size": 63488 00:10:57.541 }, 00:10:57.541 { 00:10:57.541 "name": null, 00:10:57.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.541 "is_configured": false, 00:10:57.541 "data_offset": 2048, 00:10:57.541 "data_size": 63488 00:10:57.541 }, 00:10:57.541 { 00:10:57.541 "name": null, 00:10:57.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.541 "is_configured": false, 00:10:57.541 "data_offset": 2048, 00:10:57.541 "data_size": 63488 00:10:57.541 } 00:10:57.541 ] 00:10:57.541 }' 00:10:57.541 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.541 10:22:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.107 [2024-11-19 10:22:11.665268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.107 [2024-11-19 10:22:11.665329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.107 [2024-11-19 10:22:11.665350] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:58.107 [2024-11-19 10:22:11.665359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.107 [2024-11-19 10:22:11.665792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.107 [2024-11-19 10:22:11.665810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.107 [2024-11-19 10:22:11.665888] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.107 [2024-11-19 10:22:11.665909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.107 pt2 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.107 [2024-11-19 10:22:11.673227] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.107 [2024-11-19 10:22:11.673318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.107 [2024-11-19 10:22:11.673361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:58.107 [2024-11-19 10:22:11.673392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.107 [2024-11-19 10:22:11.673766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.107 [2024-11-19 10:22:11.673819] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.107 [2024-11-19 10:22:11.673911] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:58.107 [2024-11-19 10:22:11.673956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.107 pt3 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.107 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.107 [2024-11-19 10:22:11.681184] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:58.107 [2024-11-19 10:22:11.681267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.107 [2024-11-19 10:22:11.681309] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:58.107 [2024-11-19 10:22:11.681337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.107 [2024-11-19 10:22:11.681721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.107 [2024-11-19 10:22:11.681773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.107 [2024-11-19 10:22:11.681856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:58.107 [2024-11-19 10:22:11.681877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.107 [2024-11-19 10:22:11.682024] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.107 [2024-11-19 10:22:11.682033] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.107 [2024-11-19 10:22:11.682261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:58.108 [2024-11-19 10:22:11.682403] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.108 [2024-11-19 10:22:11.682416] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.108 [2024-11-19 10:22:11.682544] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.108 pt4 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.108 "name": "raid_bdev1", 00:10:58.108 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:58.108 "strip_size_kb": 64, 00:10:58.108 "state": "online", 00:10:58.108 "raid_level": "concat", 00:10:58.108 
"superblock": true, 00:10:58.108 "num_base_bdevs": 4, 00:10:58.108 "num_base_bdevs_discovered": 4, 00:10:58.108 "num_base_bdevs_operational": 4, 00:10:58.108 "base_bdevs_list": [ 00:10:58.108 { 00:10:58.108 "name": "pt1", 00:10:58.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.108 "is_configured": true, 00:10:58.108 "data_offset": 2048, 00:10:58.108 "data_size": 63488 00:10:58.108 }, 00:10:58.108 { 00:10:58.108 "name": "pt2", 00:10:58.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.108 "is_configured": true, 00:10:58.108 "data_offset": 2048, 00:10:58.108 "data_size": 63488 00:10:58.108 }, 00:10:58.108 { 00:10:58.108 "name": "pt3", 00:10:58.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.108 "is_configured": true, 00:10:58.108 "data_offset": 2048, 00:10:58.108 "data_size": 63488 00:10:58.108 }, 00:10:58.108 { 00:10:58.108 "name": "pt4", 00:10:58.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.108 "is_configured": true, 00:10:58.108 "data_offset": 2048, 00:10:58.108 "data_size": 63488 00:10:58.108 } 00:10:58.108 ] 00:10:58.108 }' 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.108 10:22:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.367 10:22:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.367 [2024-11-19 10:22:12.084863] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.367 "name": "raid_bdev1", 00:10:58.367 "aliases": [ 00:10:58.367 "1dd99be1-5937-41e0-b983-64ec19451a81" 00:10:58.367 ], 00:10:58.367 "product_name": "Raid Volume", 00:10:58.367 "block_size": 512, 00:10:58.367 "num_blocks": 253952, 00:10:58.367 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:58.367 "assigned_rate_limits": { 00:10:58.367 "rw_ios_per_sec": 0, 00:10:58.367 "rw_mbytes_per_sec": 0, 00:10:58.367 "r_mbytes_per_sec": 0, 00:10:58.367 "w_mbytes_per_sec": 0 00:10:58.367 }, 00:10:58.367 "claimed": false, 00:10:58.367 "zoned": false, 00:10:58.367 "supported_io_types": { 00:10:58.367 "read": true, 00:10:58.367 "write": true, 00:10:58.367 "unmap": true, 00:10:58.367 "flush": true, 00:10:58.367 "reset": true, 00:10:58.367 "nvme_admin": false, 00:10:58.367 "nvme_io": false, 00:10:58.367 "nvme_io_md": false, 00:10:58.367 "write_zeroes": true, 00:10:58.367 "zcopy": false, 00:10:58.367 "get_zone_info": false, 00:10:58.367 "zone_management": false, 00:10:58.367 "zone_append": false, 00:10:58.367 "compare": false, 00:10:58.367 "compare_and_write": false, 00:10:58.367 "abort": false, 00:10:58.367 "seek_hole": false, 00:10:58.367 "seek_data": false, 00:10:58.367 "copy": false, 00:10:58.367 "nvme_iov_md": false 00:10:58.367 }, 00:10:58.367 
"memory_domains": [ 00:10:58.367 { 00:10:58.367 "dma_device_id": "system", 00:10:58.367 "dma_device_type": 1 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.367 "dma_device_type": 2 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "system", 00:10:58.367 "dma_device_type": 1 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.367 "dma_device_type": 2 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "system", 00:10:58.367 "dma_device_type": 1 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.367 "dma_device_type": 2 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "system", 00:10:58.367 "dma_device_type": 1 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.367 "dma_device_type": 2 00:10:58.367 } 00:10:58.367 ], 00:10:58.367 "driver_specific": { 00:10:58.367 "raid": { 00:10:58.367 "uuid": "1dd99be1-5937-41e0-b983-64ec19451a81", 00:10:58.367 "strip_size_kb": 64, 00:10:58.367 "state": "online", 00:10:58.367 "raid_level": "concat", 00:10:58.367 "superblock": true, 00:10:58.367 "num_base_bdevs": 4, 00:10:58.367 "num_base_bdevs_discovered": 4, 00:10:58.367 "num_base_bdevs_operational": 4, 00:10:58.367 "base_bdevs_list": [ 00:10:58.367 { 00:10:58.367 "name": "pt1", 00:10:58.367 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.367 "is_configured": true, 00:10:58.367 "data_offset": 2048, 00:10:58.367 "data_size": 63488 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "name": "pt2", 00:10:58.367 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.367 "is_configured": true, 00:10:58.367 "data_offset": 2048, 00:10:58.367 "data_size": 63488 00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "name": "pt3", 00:10:58.367 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.367 "is_configured": true, 00:10:58.367 "data_offset": 2048, 00:10:58.367 "data_size": 63488 
00:10:58.367 }, 00:10:58.367 { 00:10:58.367 "name": "pt4", 00:10:58.367 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.367 "is_configured": true, 00:10:58.367 "data_offset": 2048, 00:10:58.367 "data_size": 63488 00:10:58.367 } 00:10:58.367 ] 00:10:58.367 } 00:10:58.367 } 00:10:58.367 }' 00:10:58.367 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.626 pt2 00:10:58.626 pt3 00:10:58.626 pt4' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.626 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:58.885 [2024-11-19 10:22:12.408315] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1dd99be1-5937-41e0-b983-64ec19451a81 '!=' 1dd99be1-5937-41e0-b983-64ec19451a81 ']' 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72395 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72395 ']' 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72395 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72395 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72395' 00:10:58.885 killing process with pid 72395 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72395 00:10:58.885 10:22:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72395 00:10:58.885 [2024-11-19 10:22:12.487869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.885 [2024-11-19 10:22:12.487951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.885 [2024-11-19 10:22:12.488093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.885 [2024-11-19 10:22:12.488145] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:59.144 [2024-11-19 10:22:12.884806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.521 ************************************ 00:11:00.521 END TEST raid_superblock_test 00:11:00.521 ************************************ 00:11:00.521 10:22:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:00.521 00:11:00.521 real 0m5.337s 00:11:00.521 user 0m7.620s 00:11:00.521 sys 0m0.859s 00:11:00.521 10:22:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.521 10:22:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.521 10:22:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:00.521 10:22:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.521 10:22:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.521 10:22:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.521 ************************************ 00:11:00.521 START TEST raid_read_error_test 00:11:00.521 ************************************ 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7hrgV8nSh6 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72650 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72650 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 72650 ']' 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.521 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.521 [2024-11-19 10:22:14.153213] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
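(Editor's note, not part of the captured log.) Earlier in this transcript, `bdev_raid.sh@188` extracts the configured base bdev names from the `bdev_raid_get_bdevs` JSON with `jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'`, yielding `pt1 pt2 pt3 pt4`. The same selection can be sketched in Python; the structure below mirrors the JSON shape shown in this log, with fields trimmed for brevity:

```python
# Minimal stand-in for the bdev_raid_get_bdevs output seen in this log;
# only the fields the jq filter touches are reproduced here.
raid_info = {
    "driver_specific": {
        "raid": {
            "base_bdevs_list": [
                {"name": "pt1", "is_configured": True},
                {"name": "pt2", "is_configured": True},
                {"name": "pt3", "is_configured": True},
                {"name": "pt4", "is_configured": True},
            ]
        }
    }
}

# Equivalent of:
#   jq -r '.driver_specific.raid.base_bdevs_list[]
#          | select(.is_configured == true).name'
def configured_base_bdevs(info):
    return [b["name"]
            for b in info["driver_specific"]["raid"]["base_bdevs_list"]
            if b["is_configured"]]

print(" ".join(configured_base_bdevs(raid_info)))  # pt1 pt2 pt3 pt4
```

In the shell test this list drives the per-bdev comparison loop (`for name in $base_bdev_names`) that follows.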
00:11:00.521 [2024-11-19 10:22:14.153393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72650 ] 00:11:00.781 [2024-11-19 10:22:14.326545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.781 [2024-11-19 10:22:14.441106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.041 [2024-11-19 10:22:14.644791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.041 [2024-11-19 10:22:14.644932] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.300 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.300 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:01.300 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.300 10:22:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.300 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.300 10:22:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.300 BaseBdev1_malloc 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.300 true 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
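(Editor's note, not part of the captured log.) Later in this test, `bdev_raid.sh@845` greps the bdevperf log for the `raid_bdev1` row and takes column 6 as `fail_per_s=0.73`. Assuming that figure is simply failed I/Os divided by runtime (which matches the `"io_failed": 1` and `"runtime": 1.368662` values in the results JSON recorded in this log), it can be recomputed as:

```python
# Values copied from the bdevperf "results" JSON in this log;
# the division itself is an assumption about how bdevperf derives fail/s.
io_failed = 1        # "io_failed": 1
runtime_s = 1.368662 # "runtime": 1.368662 (seconds)

fail_per_s = io_failed / runtime_s
print(f"{fail_per_s:.2f}")  # 0.73
```

The test then asserts the rate is nonzero (`[[ 0.73 != \0\.\0\0 ]]`), confirming the injected read error on `EE_BaseBdev1_malloc` actually surfaced.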
00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.300 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.300 [2024-11-19 10:22:15.030362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.300 [2024-11-19 10:22:15.030422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.300 [2024-11-19 10:22:15.030443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:01.300 [2024-11-19 10:22:15.030454] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.300 [2024-11-19 10:22:15.032564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.301 [2024-11-19 10:22:15.032607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.301 BaseBdev1 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.301 BaseBdev2_malloc 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.301 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 true 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 [2024-11-19 10:22:15.087407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:01.561 [2024-11-19 10:22:15.087530] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.561 [2024-11-19 10:22:15.087576] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.561 [2024-11-19 10:22:15.087592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.561 [2024-11-19 10:22:15.089707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.561 [2024-11-19 10:22:15.089748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.561 BaseBdev2 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 BaseBdev3_malloc 00:11:01.561 10:22:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 true 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 [2024-11-19 10:22:15.169504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:01.561 [2024-11-19 10:22:15.169557] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.561 [2024-11-19 10:22:15.169575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:01.561 [2024-11-19 10:22:15.169586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.561 [2024-11-19 10:22:15.171700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.561 [2024-11-19 10:22:15.171741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:01.561 BaseBdev3 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 BaseBdev4_malloc 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 true 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.561 [2024-11-19 10:22:15.232551] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:01.561 [2024-11-19 10:22:15.232602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.561 [2024-11-19 10:22:15.232619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:01.561 [2024-11-19 10:22:15.232629] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.561 [2024-11-19 10:22:15.234662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.561 [2024-11-19 10:22:15.234704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:01.561 BaseBdev4 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:01.561 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.562 [2024-11-19 10:22:15.240590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.562 [2024-11-19 10:22:15.242548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.562 [2024-11-19 10:22:15.242623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.562 [2024-11-19 10:22:15.242697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.562 [2024-11-19 10:22:15.242903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:01.562 [2024-11-19 10:22:15.242918] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:01.562 [2024-11-19 10:22:15.243207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:01.562 [2024-11-19 10:22:15.243368] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:01.562 [2024-11-19 10:22:15.243379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:01.562 [2024-11-19 10:22:15.243526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:01.562 10:22:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.562 "name": "raid_bdev1", 00:11:01.562 "uuid": "5ab57bb7-196b-47a6-8984-aa4da840e56e", 00:11:01.562 "strip_size_kb": 64, 00:11:01.562 "state": "online", 00:11:01.562 "raid_level": "concat", 00:11:01.562 "superblock": true, 00:11:01.562 "num_base_bdevs": 4, 00:11:01.562 "num_base_bdevs_discovered": 4, 00:11:01.562 "num_base_bdevs_operational": 4, 00:11:01.562 "base_bdevs_list": [ 
00:11:01.562 { 00:11:01.562 "name": "BaseBdev1", 00:11:01.562 "uuid": "38e0eb52-a4fa-57a0-9d37-26ae214ae10f", 00:11:01.562 "is_configured": true, 00:11:01.562 "data_offset": 2048, 00:11:01.562 "data_size": 63488 00:11:01.562 }, 00:11:01.562 { 00:11:01.562 "name": "BaseBdev2", 00:11:01.562 "uuid": "8ef80d74-156e-5d6a-8293-d183ed5ae743", 00:11:01.562 "is_configured": true, 00:11:01.562 "data_offset": 2048, 00:11:01.562 "data_size": 63488 00:11:01.562 }, 00:11:01.562 { 00:11:01.562 "name": "BaseBdev3", 00:11:01.562 "uuid": "9b3fcc61-cd5e-5749-bd9c-ca2c1f45277c", 00:11:01.562 "is_configured": true, 00:11:01.562 "data_offset": 2048, 00:11:01.562 "data_size": 63488 00:11:01.562 }, 00:11:01.562 { 00:11:01.562 "name": "BaseBdev4", 00:11:01.562 "uuid": "987e2734-3546-5a90-93f1-8d0726491370", 00:11:01.562 "is_configured": true, 00:11:01.562 "data_offset": 2048, 00:11:01.562 "data_size": 63488 00:11:01.562 } 00:11:01.562 ] 00:11:01.562 }' 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.562 10:22:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.132 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:02.132 10:22:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.132 [2024-11-19 10:22:15.756910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.072 10:22:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.072 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.072 10:22:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.072 "name": "raid_bdev1", 00:11:03.072 "uuid": "5ab57bb7-196b-47a6-8984-aa4da840e56e", 00:11:03.072 "strip_size_kb": 64, 00:11:03.072 "state": "online", 00:11:03.072 "raid_level": "concat", 00:11:03.072 "superblock": true, 00:11:03.072 "num_base_bdevs": 4, 00:11:03.072 "num_base_bdevs_discovered": 4, 00:11:03.072 "num_base_bdevs_operational": 4, 00:11:03.072 "base_bdevs_list": [ 00:11:03.072 { 00:11:03.072 "name": "BaseBdev1", 00:11:03.072 "uuid": "38e0eb52-a4fa-57a0-9d37-26ae214ae10f", 00:11:03.072 "is_configured": true, 00:11:03.072 "data_offset": 2048, 00:11:03.072 "data_size": 63488 00:11:03.072 }, 00:11:03.072 { 00:11:03.072 "name": "BaseBdev2", 00:11:03.072 "uuid": "8ef80d74-156e-5d6a-8293-d183ed5ae743", 00:11:03.072 "is_configured": true, 00:11:03.072 "data_offset": 2048, 00:11:03.072 "data_size": 63488 00:11:03.072 }, 00:11:03.072 { 00:11:03.072 "name": "BaseBdev3", 00:11:03.072 "uuid": "9b3fcc61-cd5e-5749-bd9c-ca2c1f45277c", 00:11:03.072 "is_configured": true, 00:11:03.072 "data_offset": 2048, 00:11:03.072 "data_size": 63488 00:11:03.072 }, 00:11:03.072 { 00:11:03.072 "name": "BaseBdev4", 00:11:03.072 "uuid": "987e2734-3546-5a90-93f1-8d0726491370", 00:11:03.072 "is_configured": true, 00:11:03.072 "data_offset": 2048, 00:11:03.072 "data_size": 63488 00:11:03.073 } 00:11:03.073 ] 00:11:03.073 }' 00:11:03.073 10:22:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.073 10:22:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.642 [2024-11-19 10:22:17.124753] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.642 [2024-11-19 10:22:17.124787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.642 [2024-11-19 10:22:17.127589] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.642 [2024-11-19 10:22:17.127717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.642 [2024-11-19 10:22:17.127775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.642 [2024-11-19 10:22:17.127793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:03.642 { 00:11:03.642 "results": [ 00:11:03.642 { 00:11:03.642 "job": "raid_bdev1", 00:11:03.642 "core_mask": "0x1", 00:11:03.642 "workload": "randrw", 00:11:03.642 "percentage": 50, 00:11:03.642 "status": "finished", 00:11:03.642 "queue_depth": 1, 00:11:03.642 "io_size": 131072, 00:11:03.642 "runtime": 1.368662, 00:11:03.642 "iops": 15854.169984992643, 00:11:03.642 "mibps": 1981.7712481240803, 00:11:03.642 "io_failed": 1, 00:11:03.642 "io_timeout": 0, 00:11:03.642 "avg_latency_us": 87.68738228724366, 00:11:03.642 "min_latency_us": 26.829694323144103, 00:11:03.642 "max_latency_us": 1438.071615720524 00:11:03.642 } 00:11:03.642 ], 00:11:03.642 "core_count": 1 00:11:03.642 } 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72650 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72650 ']' 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72650 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72650 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72650' 00:11:03.642 killing process with pid 72650 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72650 00:11:03.642 10:22:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72650 00:11:03.642 [2024-11-19 10:22:17.160801] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.901 [2024-11-19 10:22:17.489844] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7hrgV8nSh6 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:05.279 ************************************ 00:11:05.279 END TEST raid_read_error_test 00:11:05.279 ************************************ 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:05.279 00:11:05.279 real 0m4.611s 
00:11:05.279 user 0m5.371s 00:11:05.279 sys 0m0.582s 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:05.279 10:22:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.279 10:22:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:05.279 10:22:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:05.279 10:22:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.279 10:22:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.279 ************************************ 00:11:05.279 START TEST raid_write_error_test 00:11:05.279 ************************************ 00:11:05.279 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:05.279 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OtHfSMN0kX 00:11:05.280 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72802 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72802 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72802 ']' 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.280 10:22:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:05.280 [2024-11-19 10:22:18.823453] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:05.280 [2024-11-19 10:22:18.823588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72802 ] 00:11:05.280 [2024-11-19 10:22:18.994379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.547 [2024-11-19 10:22:19.109315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.547 [2024-11-19 10:22:19.304518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.547 [2024-11-19 10:22:19.304578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 BaseBdev1_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 true 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 [2024-11-19 10:22:19.701469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:06.121 [2024-11-19 10:22:19.701584] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.121 [2024-11-19 10:22:19.701609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:06.121 [2024-11-19 10:22:19.701620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.121 [2024-11-19 10:22:19.703856] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.121 [2024-11-19 10:22:19.703900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:06.121 BaseBdev1 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 BaseBdev2_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:06.121 10:22:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 true 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 [2024-11-19 10:22:19.764885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:06.121 [2024-11-19 10:22:19.765009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.121 [2024-11-19 10:22:19.765032] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:06.121 [2024-11-19 10:22:19.765043] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.121 [2024-11-19 10:22:19.767123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.121 [2024-11-19 10:22:19.767167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:06.121 BaseBdev2 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:06.121 BaseBdev3_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 true 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 [2024-11-19 10:22:19.834116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:06.121 [2024-11-19 10:22:19.834166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.121 [2024-11-19 10:22:19.834183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:06.121 [2024-11-19 10:22:19.834193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.121 [2024-11-19 10:22:19.836229] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.121 [2024-11-19 10:22:19.836323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:06.121 BaseBdev3 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.121 BaseBdev4_malloc 00:11:06.121 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.122 true 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.122 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.122 [2024-11-19 10:22:19.898968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:06.122 [2024-11-19 10:22:19.899024] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.122 [2024-11-19 10:22:19.899056] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:06.122 [2024-11-19 10:22:19.899067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.382 [2024-11-19 10:22:19.901006] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.382 [2024-11-19 10:22:19.901043] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:06.382 BaseBdev4 
00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.382 [2024-11-19 10:22:19.911017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.382 [2024-11-19 10:22:19.912736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.382 [2024-11-19 10:22:19.912855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.382 [2024-11-19 10:22:19.912923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.382 [2024-11-19 10:22:19.913161] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:06.382 [2024-11-19 10:22:19.913175] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.382 [2024-11-19 10:22:19.913399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:06.382 [2024-11-19 10:22:19.913541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:06.382 [2024-11-19 10:22:19.913552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:06.382 [2024-11-19 10:22:19.913696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.382 "name": "raid_bdev1", 00:11:06.382 "uuid": "ff1f39d0-a5ac-46ef-9b0b-dd02cf1b41b0", 00:11:06.382 "strip_size_kb": 64, 00:11:06.382 "state": "online", 00:11:06.382 "raid_level": "concat", 00:11:06.382 "superblock": true, 00:11:06.382 "num_base_bdevs": 4, 00:11:06.382 "num_base_bdevs_discovered": 4, 00:11:06.382 
"num_base_bdevs_operational": 4, 00:11:06.382 "base_bdevs_list": [ 00:11:06.382 { 00:11:06.382 "name": "BaseBdev1", 00:11:06.382 "uuid": "c860d045-0114-5116-8411-14456e645fa3", 00:11:06.382 "is_configured": true, 00:11:06.382 "data_offset": 2048, 00:11:06.382 "data_size": 63488 00:11:06.382 }, 00:11:06.382 { 00:11:06.382 "name": "BaseBdev2", 00:11:06.382 "uuid": "ce43f1fe-e9ec-53b5-a14e-f30e082fa0d4", 00:11:06.382 "is_configured": true, 00:11:06.382 "data_offset": 2048, 00:11:06.382 "data_size": 63488 00:11:06.382 }, 00:11:06.382 { 00:11:06.382 "name": "BaseBdev3", 00:11:06.382 "uuid": "b191220c-4846-50e5-b9da-3b886187ecbb", 00:11:06.382 "is_configured": true, 00:11:06.382 "data_offset": 2048, 00:11:06.382 "data_size": 63488 00:11:06.382 }, 00:11:06.382 { 00:11:06.382 "name": "BaseBdev4", 00:11:06.382 "uuid": "74b18999-8d1f-5564-8e2b-add130af2cb8", 00:11:06.382 "is_configured": true, 00:11:06.382 "data_offset": 2048, 00:11:06.382 "data_size": 63488 00:11:06.382 } 00:11:06.382 ] 00:11:06.382 }' 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.382 10:22:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.642 10:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:06.642 10:22:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:06.642 [2024-11-19 10:22:20.411425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:07.581 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.582 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.582 10:22:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.841 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.841 "name": "raid_bdev1", 00:11:07.841 "uuid": "ff1f39d0-a5ac-46ef-9b0b-dd02cf1b41b0", 00:11:07.841 "strip_size_kb": 64, 00:11:07.841 "state": "online", 00:11:07.841 "raid_level": "concat", 00:11:07.841 "superblock": true, 00:11:07.841 "num_base_bdevs": 4, 00:11:07.841 "num_base_bdevs_discovered": 4, 00:11:07.841 "num_base_bdevs_operational": 4, 00:11:07.841 "base_bdevs_list": [ 00:11:07.841 { 00:11:07.841 "name": "BaseBdev1", 00:11:07.841 "uuid": "c860d045-0114-5116-8411-14456e645fa3", 00:11:07.841 "is_configured": true, 00:11:07.841 "data_offset": 2048, 00:11:07.841 "data_size": 63488 00:11:07.841 }, 00:11:07.841 { 00:11:07.841 "name": "BaseBdev2", 00:11:07.842 "uuid": "ce43f1fe-e9ec-53b5-a14e-f30e082fa0d4", 00:11:07.842 "is_configured": true, 00:11:07.842 "data_offset": 2048, 00:11:07.842 "data_size": 63488 00:11:07.842 }, 00:11:07.842 { 00:11:07.842 "name": "BaseBdev3", 00:11:07.842 "uuid": "b191220c-4846-50e5-b9da-3b886187ecbb", 00:11:07.842 "is_configured": true, 00:11:07.842 "data_offset": 2048, 00:11:07.842 "data_size": 63488 00:11:07.842 }, 00:11:07.842 { 00:11:07.842 "name": "BaseBdev4", 00:11:07.842 "uuid": "74b18999-8d1f-5564-8e2b-add130af2cb8", 00:11:07.842 "is_configured": true, 00:11:07.842 "data_offset": 2048, 00:11:07.842 "data_size": 63488 00:11:07.842 } 00:11:07.842 ] 00:11:07.842 }' 00:11:07.842 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.842 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.101 [2024-11-19 10:22:21.799138] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.101 [2024-11-19 10:22:21.799242] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.101 [2024-11-19 10:22:21.801804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.101 [2024-11-19 10:22:21.801903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.101 [2024-11-19 10:22:21.801964] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.101 [2024-11-19 10:22:21.802028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:08.101 { 00:11:08.101 "results": [ 00:11:08.101 { 00:11:08.101 "job": "raid_bdev1", 00:11:08.101 "core_mask": "0x1", 00:11:08.101 "workload": "randrw", 00:11:08.101 "percentage": 50, 00:11:08.101 "status": "finished", 00:11:08.101 "queue_depth": 1, 00:11:08.101 "io_size": 131072, 00:11:08.101 "runtime": 1.388733, 00:11:08.101 "iops": 16148.532511289068, 00:11:08.101 "mibps": 2018.5665639111335, 00:11:08.101 "io_failed": 1, 00:11:08.101 "io_timeout": 0, 00:11:08.101 "avg_latency_us": 86.2399203393134, 00:11:08.101 "min_latency_us": 25.152838427947597, 00:11:08.101 "max_latency_us": 1359.3711790393013 00:11:08.101 } 00:11:08.101 ], 00:11:08.101 "core_count": 1 00:11:08.101 } 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72802 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72802 ']' 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72802 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72802 00:11:08.101 killing process with pid 72802 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.101 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72802' 00:11:08.102 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72802 00:11:08.102 [2024-11-19 10:22:21.846731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.102 10:22:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72802 00:11:08.671 [2024-11-19 10:22:22.160022] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OtHfSMN0kX 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.610 ************************************ 00:11:09.610 END TEST raid_write_error_test 00:11:09.610 ************************************ 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.610 10:22:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:09.610 00:11:09.610 real 0m4.577s 00:11:09.610 user 0m5.395s 00:11:09.610 sys 0m0.564s 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.610 10:22:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.610 10:22:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:09.610 10:22:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:09.610 10:22:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:09.610 10:22:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.610 10:22:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.610 ************************************ 00:11:09.611 START TEST raid_state_function_test 00:11:09.611 ************************************ 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:09.611 10:22:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:09.611 Process raid pid: 72942 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72942 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72942' 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72942 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72942 ']' 00:11:09.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.611 10:22:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.871 [2024-11-19 10:22:23.465906] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:09.871 [2024-11-19 10:22:23.466055] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.871 [2024-11-19 10:22:23.639600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.130 [2024-11-19 10:22:23.754236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.395 [2024-11-19 10:22:23.948536] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.395 [2024-11-19 10:22:23.948578] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 [2024-11-19 10:22:24.291879] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.663 [2024-11-19 10:22:24.291938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.663 [2024-11-19 10:22:24.291948] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.663 [2024-11-19 10:22:24.291974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.663 [2024-11-19 10:22:24.291980] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:10.663 [2024-11-19 10:22:24.291989] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.663 [2024-11-19 10:22:24.291995] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.663 [2024-11-19 10:22:24.292004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.663 "name": "Existed_Raid", 00:11:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.663 "strip_size_kb": 0, 00:11:10.663 "state": "configuring", 00:11:10.663 "raid_level": "raid1", 00:11:10.663 "superblock": false, 00:11:10.663 "num_base_bdevs": 4, 00:11:10.663 "num_base_bdevs_discovered": 0, 00:11:10.663 "num_base_bdevs_operational": 4, 00:11:10.663 "base_bdevs_list": [ 00:11:10.663 { 00:11:10.663 "name": "BaseBdev1", 00:11:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.663 "is_configured": false, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 0 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "name": "BaseBdev2", 00:11:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.663 "is_configured": false, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 0 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "name": "BaseBdev3", 00:11:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.663 "is_configured": false, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 0 00:11:10.663 }, 00:11:10.663 { 00:11:10.663 "name": "BaseBdev4", 00:11:10.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.663 "is_configured": false, 00:11:10.663 "data_offset": 0, 00:11:10.663 "data_size": 0 00:11:10.663 } 00:11:10.663 ] 00:11:10.663 }' 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.663 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.234 [2024-11-19 10:22:24.751104] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.234 [2024-11-19 10:22:24.751216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.234 [2024-11-19 10:22:24.763086] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.234 [2024-11-19 10:22:24.763177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.234 [2024-11-19 10:22:24.763210] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.234 [2024-11-19 10:22:24.763234] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.234 [2024-11-19 10:22:24.763273] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.234 [2024-11-19 10:22:24.763296] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.234 [2024-11-19 10:22:24.763341] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.234 [2024-11-19 10:22:24.763372] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.234 [2024-11-19 10:22:24.810605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.234 BaseBdev1 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.234 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.235 [ 00:11:11.235 { 00:11:11.235 "name": "BaseBdev1", 00:11:11.235 "aliases": [ 00:11:11.235 "4b24c5e5-a680-437b-bc92-baf5367dab46" 00:11:11.235 ], 00:11:11.235 "product_name": "Malloc disk", 00:11:11.235 "block_size": 512, 00:11:11.235 "num_blocks": 65536, 00:11:11.235 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:11.235 "assigned_rate_limits": { 00:11:11.235 "rw_ios_per_sec": 0, 00:11:11.235 "rw_mbytes_per_sec": 0, 00:11:11.235 "r_mbytes_per_sec": 0, 00:11:11.235 "w_mbytes_per_sec": 0 00:11:11.235 }, 00:11:11.235 "claimed": true, 00:11:11.235 "claim_type": "exclusive_write", 00:11:11.235 "zoned": false, 00:11:11.235 "supported_io_types": { 00:11:11.235 "read": true, 00:11:11.235 "write": true, 00:11:11.235 "unmap": true, 00:11:11.235 "flush": true, 00:11:11.235 "reset": true, 00:11:11.235 "nvme_admin": false, 00:11:11.235 "nvme_io": false, 00:11:11.235 "nvme_io_md": false, 00:11:11.235 "write_zeroes": true, 00:11:11.235 "zcopy": true, 00:11:11.235 "get_zone_info": false, 00:11:11.235 "zone_management": false, 00:11:11.235 "zone_append": false, 00:11:11.235 "compare": false, 00:11:11.235 "compare_and_write": false, 00:11:11.235 "abort": true, 00:11:11.235 "seek_hole": false, 00:11:11.235 "seek_data": false, 00:11:11.235 "copy": true, 00:11:11.235 "nvme_iov_md": false 00:11:11.235 }, 00:11:11.235 "memory_domains": [ 00:11:11.235 { 00:11:11.235 "dma_device_id": "system", 00:11:11.235 "dma_device_type": 1 00:11:11.235 }, 00:11:11.235 { 00:11:11.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.235 "dma_device_type": 2 00:11:11.235 } 00:11:11.235 ], 00:11:11.235 "driver_specific": {} 00:11:11.235 } 00:11:11.235 ] 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.235 "name": "Existed_Raid", 
00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "strip_size_kb": 0, 00:11:11.235 "state": "configuring", 00:11:11.235 "raid_level": "raid1", 00:11:11.235 "superblock": false, 00:11:11.235 "num_base_bdevs": 4, 00:11:11.235 "num_base_bdevs_discovered": 1, 00:11:11.235 "num_base_bdevs_operational": 4, 00:11:11.235 "base_bdevs_list": [ 00:11:11.235 { 00:11:11.235 "name": "BaseBdev1", 00:11:11.235 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:11.235 "is_configured": true, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 65536 00:11:11.235 }, 00:11:11.235 { 00:11:11.235 "name": "BaseBdev2", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "is_configured": false, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 0 00:11:11.235 }, 00:11:11.235 { 00:11:11.235 "name": "BaseBdev3", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "is_configured": false, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 0 00:11:11.235 }, 00:11:11.235 { 00:11:11.235 "name": "BaseBdev4", 00:11:11.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.235 "is_configured": false, 00:11:11.235 "data_offset": 0, 00:11:11.235 "data_size": 0 00:11:11.235 } 00:11:11.235 ] 00:11:11.235 }' 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.235 10:22:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.806 [2024-11-19 10:22:25.301826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.806 [2024-11-19 10:22:25.301881] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.806 [2024-11-19 10:22:25.309856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.806 [2024-11-19 10:22:25.311806] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.806 [2024-11-19 10:22:25.311908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.806 [2024-11-19 10:22:25.311941] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.806 [2024-11-19 10:22:25.311967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.806 [2024-11-19 10:22:25.311986] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.806 [2024-11-19 10:22:25.312017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:11.806 
10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.806 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.807 "name": "Existed_Raid", 00:11:11.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.807 "strip_size_kb": 0, 00:11:11.807 "state": "configuring", 00:11:11.807 "raid_level": "raid1", 00:11:11.807 "superblock": false, 00:11:11.807 "num_base_bdevs": 4, 00:11:11.807 "num_base_bdevs_discovered": 1, 
00:11:11.807 "num_base_bdevs_operational": 4, 00:11:11.807 "base_bdevs_list": [ 00:11:11.807 { 00:11:11.807 "name": "BaseBdev1", 00:11:11.807 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:11.807 "is_configured": true, 00:11:11.807 "data_offset": 0, 00:11:11.807 "data_size": 65536 00:11:11.807 }, 00:11:11.807 { 00:11:11.807 "name": "BaseBdev2", 00:11:11.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.807 "is_configured": false, 00:11:11.807 "data_offset": 0, 00:11:11.807 "data_size": 0 00:11:11.807 }, 00:11:11.807 { 00:11:11.807 "name": "BaseBdev3", 00:11:11.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.807 "is_configured": false, 00:11:11.807 "data_offset": 0, 00:11:11.807 "data_size": 0 00:11:11.807 }, 00:11:11.807 { 00:11:11.807 "name": "BaseBdev4", 00:11:11.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.807 "is_configured": false, 00:11:11.807 "data_offset": 0, 00:11:11.807 "data_size": 0 00:11:11.807 } 00:11:11.807 ] 00:11:11.807 }' 00:11:11.807 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.807 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.067 [2024-11-19 10:22:25.745501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.067 BaseBdev2 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.067 [ 00:11:12.067 { 00:11:12.067 "name": "BaseBdev2", 00:11:12.067 "aliases": [ 00:11:12.067 "d2fc3672-6e20-48bc-9329-28ce63ff134b" 00:11:12.067 ], 00:11:12.067 "product_name": "Malloc disk", 00:11:12.067 "block_size": 512, 00:11:12.067 "num_blocks": 65536, 00:11:12.067 "uuid": "d2fc3672-6e20-48bc-9329-28ce63ff134b", 00:11:12.067 "assigned_rate_limits": { 00:11:12.067 "rw_ios_per_sec": 0, 00:11:12.067 "rw_mbytes_per_sec": 0, 00:11:12.067 "r_mbytes_per_sec": 0, 00:11:12.067 "w_mbytes_per_sec": 0 00:11:12.067 }, 00:11:12.067 "claimed": true, 00:11:12.067 "claim_type": "exclusive_write", 00:11:12.067 "zoned": false, 00:11:12.067 "supported_io_types": { 00:11:12.067 "read": true, 
00:11:12.067 "write": true, 00:11:12.067 "unmap": true, 00:11:12.067 "flush": true, 00:11:12.067 "reset": true, 00:11:12.067 "nvme_admin": false, 00:11:12.067 "nvme_io": false, 00:11:12.067 "nvme_io_md": false, 00:11:12.067 "write_zeroes": true, 00:11:12.067 "zcopy": true, 00:11:12.067 "get_zone_info": false, 00:11:12.067 "zone_management": false, 00:11:12.067 "zone_append": false, 00:11:12.067 "compare": false, 00:11:12.067 "compare_and_write": false, 00:11:12.067 "abort": true, 00:11:12.067 "seek_hole": false, 00:11:12.067 "seek_data": false, 00:11:12.067 "copy": true, 00:11:12.067 "nvme_iov_md": false 00:11:12.067 }, 00:11:12.067 "memory_domains": [ 00:11:12.067 { 00:11:12.067 "dma_device_id": "system", 00:11:12.067 "dma_device_type": 1 00:11:12.067 }, 00:11:12.067 { 00:11:12.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.067 "dma_device_type": 2 00:11:12.067 } 00:11:12.067 ], 00:11:12.067 "driver_specific": {} 00:11:12.067 } 00:11:12.067 ] 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.067 "name": "Existed_Raid", 00:11:12.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.067 "strip_size_kb": 0, 00:11:12.067 "state": "configuring", 00:11:12.067 "raid_level": "raid1", 00:11:12.067 "superblock": false, 00:11:12.067 "num_base_bdevs": 4, 00:11:12.067 "num_base_bdevs_discovered": 2, 00:11:12.067 "num_base_bdevs_operational": 4, 00:11:12.067 "base_bdevs_list": [ 00:11:12.067 { 00:11:12.067 "name": "BaseBdev1", 00:11:12.067 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:12.067 "is_configured": true, 00:11:12.067 "data_offset": 0, 00:11:12.067 "data_size": 65536 00:11:12.067 }, 00:11:12.067 { 00:11:12.067 "name": "BaseBdev2", 00:11:12.067 "uuid": "d2fc3672-6e20-48bc-9329-28ce63ff134b", 00:11:12.067 "is_configured": true, 
00:11:12.067 "data_offset": 0, 00:11:12.067 "data_size": 65536 00:11:12.067 }, 00:11:12.067 { 00:11:12.067 "name": "BaseBdev3", 00:11:12.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.067 "is_configured": false, 00:11:12.067 "data_offset": 0, 00:11:12.067 "data_size": 0 00:11:12.067 }, 00:11:12.067 { 00:11:12.067 "name": "BaseBdev4", 00:11:12.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.067 "is_configured": false, 00:11:12.067 "data_offset": 0, 00:11:12.067 "data_size": 0 00:11:12.067 } 00:11:12.067 ] 00:11:12.067 }' 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.067 10:22:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 [2024-11-19 10:22:26.256919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.638 BaseBdev3 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 [ 00:11:12.638 { 00:11:12.638 "name": "BaseBdev3", 00:11:12.638 "aliases": [ 00:11:12.638 "53df3fcb-d95a-4d86-87a1-8186946629f7" 00:11:12.638 ], 00:11:12.638 "product_name": "Malloc disk", 00:11:12.638 "block_size": 512, 00:11:12.638 "num_blocks": 65536, 00:11:12.638 "uuid": "53df3fcb-d95a-4d86-87a1-8186946629f7", 00:11:12.638 "assigned_rate_limits": { 00:11:12.638 "rw_ios_per_sec": 0, 00:11:12.638 "rw_mbytes_per_sec": 0, 00:11:12.638 "r_mbytes_per_sec": 0, 00:11:12.638 "w_mbytes_per_sec": 0 00:11:12.638 }, 00:11:12.638 "claimed": true, 00:11:12.638 "claim_type": "exclusive_write", 00:11:12.638 "zoned": false, 00:11:12.638 "supported_io_types": { 00:11:12.638 "read": true, 00:11:12.638 "write": true, 00:11:12.638 "unmap": true, 00:11:12.638 "flush": true, 00:11:12.638 "reset": true, 00:11:12.638 "nvme_admin": false, 00:11:12.638 "nvme_io": false, 00:11:12.638 "nvme_io_md": false, 00:11:12.638 "write_zeroes": true, 00:11:12.638 "zcopy": true, 00:11:12.638 "get_zone_info": false, 00:11:12.638 "zone_management": false, 00:11:12.638 "zone_append": false, 00:11:12.638 "compare": false, 00:11:12.638 "compare_and_write": false, 
00:11:12.638 "abort": true, 00:11:12.638 "seek_hole": false, 00:11:12.638 "seek_data": false, 00:11:12.638 "copy": true, 00:11:12.638 "nvme_iov_md": false 00:11:12.638 }, 00:11:12.638 "memory_domains": [ 00:11:12.638 { 00:11:12.638 "dma_device_id": "system", 00:11:12.638 "dma_device_type": 1 00:11:12.638 }, 00:11:12.638 { 00:11:12.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.638 "dma_device_type": 2 00:11:12.638 } 00:11:12.638 ], 00:11:12.638 "driver_specific": {} 00:11:12.638 } 00:11:12.638 ] 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.638 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.638 "name": "Existed_Raid", 00:11:12.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.638 "strip_size_kb": 0, 00:11:12.638 "state": "configuring", 00:11:12.638 "raid_level": "raid1", 00:11:12.638 "superblock": false, 00:11:12.638 "num_base_bdevs": 4, 00:11:12.638 "num_base_bdevs_discovered": 3, 00:11:12.638 "num_base_bdevs_operational": 4, 00:11:12.638 "base_bdevs_list": [ 00:11:12.638 { 00:11:12.638 "name": "BaseBdev1", 00:11:12.638 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:12.638 "is_configured": true, 00:11:12.638 "data_offset": 0, 00:11:12.638 "data_size": 65536 00:11:12.638 }, 00:11:12.638 { 00:11:12.638 "name": "BaseBdev2", 00:11:12.638 "uuid": "d2fc3672-6e20-48bc-9329-28ce63ff134b", 00:11:12.638 "is_configured": true, 00:11:12.638 "data_offset": 0, 00:11:12.638 "data_size": 65536 00:11:12.638 }, 00:11:12.638 { 00:11:12.638 "name": "BaseBdev3", 00:11:12.638 "uuid": "53df3fcb-d95a-4d86-87a1-8186946629f7", 00:11:12.639 "is_configured": true, 00:11:12.639 "data_offset": 0, 00:11:12.639 "data_size": 65536 00:11:12.639 }, 00:11:12.639 { 00:11:12.639 "name": "BaseBdev4", 00:11:12.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.639 "is_configured": false, 
00:11:12.639 "data_offset": 0, 00:11:12.639 "data_size": 0 00:11:12.639 } 00:11:12.639 ] 00:11:12.639 }' 00:11:12.639 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.639 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.209 [2024-11-19 10:22:26.792790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.209 [2024-11-19 10:22:26.792902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:13.209 [2024-11-19 10:22:26.792916] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:13.209 [2024-11-19 10:22:26.793206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:13.209 [2024-11-19 10:22:26.793375] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:13.209 [2024-11-19 10:22:26.793389] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:13.209 [2024-11-19 10:22:26.793631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.209 BaseBdev4 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.209 [ 00:11:13.209 { 00:11:13.209 "name": "BaseBdev4", 00:11:13.209 "aliases": [ 00:11:13.209 "dba662ce-ee33-43c1-ae0a-a5ab17d5320a" 00:11:13.209 ], 00:11:13.209 "product_name": "Malloc disk", 00:11:13.209 "block_size": 512, 00:11:13.209 "num_blocks": 65536, 00:11:13.209 "uuid": "dba662ce-ee33-43c1-ae0a-a5ab17d5320a", 00:11:13.209 "assigned_rate_limits": { 00:11:13.209 "rw_ios_per_sec": 0, 00:11:13.209 "rw_mbytes_per_sec": 0, 00:11:13.209 "r_mbytes_per_sec": 0, 00:11:13.209 "w_mbytes_per_sec": 0 00:11:13.209 }, 00:11:13.209 "claimed": true, 00:11:13.209 "claim_type": "exclusive_write", 00:11:13.209 "zoned": false, 00:11:13.209 "supported_io_types": { 00:11:13.209 "read": true, 00:11:13.209 "write": true, 00:11:13.209 "unmap": true, 00:11:13.209 "flush": true, 00:11:13.209 "reset": true, 00:11:13.209 
"nvme_admin": false, 00:11:13.209 "nvme_io": false, 00:11:13.209 "nvme_io_md": false, 00:11:13.209 "write_zeroes": true, 00:11:13.209 "zcopy": true, 00:11:13.209 "get_zone_info": false, 00:11:13.209 "zone_management": false, 00:11:13.209 "zone_append": false, 00:11:13.209 "compare": false, 00:11:13.209 "compare_and_write": false, 00:11:13.209 "abort": true, 00:11:13.209 "seek_hole": false, 00:11:13.209 "seek_data": false, 00:11:13.209 "copy": true, 00:11:13.209 "nvme_iov_md": false 00:11:13.209 }, 00:11:13.209 "memory_domains": [ 00:11:13.209 { 00:11:13.209 "dma_device_id": "system", 00:11:13.209 "dma_device_type": 1 00:11:13.209 }, 00:11:13.209 { 00:11:13.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.209 "dma_device_type": 2 00:11:13.209 } 00:11:13.209 ], 00:11:13.209 "driver_specific": {} 00:11:13.209 } 00:11:13.209 ] 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.209 10:22:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.209 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.210 "name": "Existed_Raid", 00:11:13.210 "uuid": "bd455661-46fb-4259-8f46-5f322932f30b", 00:11:13.210 "strip_size_kb": 0, 00:11:13.210 "state": "online", 00:11:13.210 "raid_level": "raid1", 00:11:13.210 "superblock": false, 00:11:13.210 "num_base_bdevs": 4, 00:11:13.210 "num_base_bdevs_discovered": 4, 00:11:13.210 "num_base_bdevs_operational": 4, 00:11:13.210 "base_bdevs_list": [ 00:11:13.210 { 00:11:13.210 "name": "BaseBdev1", 00:11:13.210 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:13.210 "is_configured": true, 00:11:13.210 "data_offset": 0, 00:11:13.210 "data_size": 65536 00:11:13.210 }, 00:11:13.210 { 00:11:13.210 "name": "BaseBdev2", 00:11:13.210 "uuid": "d2fc3672-6e20-48bc-9329-28ce63ff134b", 00:11:13.210 "is_configured": true, 00:11:13.210 "data_offset": 0, 00:11:13.210 "data_size": 65536 00:11:13.210 }, 00:11:13.210 { 00:11:13.210 "name": "BaseBdev3", 00:11:13.210 "uuid": 
"53df3fcb-d95a-4d86-87a1-8186946629f7", 00:11:13.210 "is_configured": true, 00:11:13.210 "data_offset": 0, 00:11:13.210 "data_size": 65536 00:11:13.210 }, 00:11:13.210 { 00:11:13.210 "name": "BaseBdev4", 00:11:13.210 "uuid": "dba662ce-ee33-43c1-ae0a-a5ab17d5320a", 00:11:13.210 "is_configured": true, 00:11:13.210 "data_offset": 0, 00:11:13.210 "data_size": 65536 00:11:13.210 } 00:11:13.210 ] 00:11:13.210 }' 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.210 10:22:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.470 [2024-11-19 10:22:27.228440] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.470 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.730 10:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.730 "name": "Existed_Raid", 00:11:13.730 "aliases": [ 00:11:13.730 "bd455661-46fb-4259-8f46-5f322932f30b" 00:11:13.730 ], 00:11:13.730 "product_name": "Raid Volume", 00:11:13.730 "block_size": 512, 00:11:13.730 "num_blocks": 65536, 00:11:13.730 "uuid": "bd455661-46fb-4259-8f46-5f322932f30b", 00:11:13.730 "assigned_rate_limits": { 00:11:13.730 "rw_ios_per_sec": 0, 00:11:13.730 "rw_mbytes_per_sec": 0, 00:11:13.730 "r_mbytes_per_sec": 0, 00:11:13.730 "w_mbytes_per_sec": 0 00:11:13.730 }, 00:11:13.730 "claimed": false, 00:11:13.730 "zoned": false, 00:11:13.730 "supported_io_types": { 00:11:13.730 "read": true, 00:11:13.730 "write": true, 00:11:13.730 "unmap": false, 00:11:13.730 "flush": false, 00:11:13.730 "reset": true, 00:11:13.730 "nvme_admin": false, 00:11:13.730 "nvme_io": false, 00:11:13.730 "nvme_io_md": false, 00:11:13.730 "write_zeroes": true, 00:11:13.730 "zcopy": false, 00:11:13.730 "get_zone_info": false, 00:11:13.730 "zone_management": false, 00:11:13.730 "zone_append": false, 00:11:13.730 "compare": false, 00:11:13.730 "compare_and_write": false, 00:11:13.730 "abort": false, 00:11:13.730 "seek_hole": false, 00:11:13.730 "seek_data": false, 00:11:13.730 "copy": false, 00:11:13.730 "nvme_iov_md": false 00:11:13.730 }, 00:11:13.730 "memory_domains": [ 00:11:13.730 { 00:11:13.730 "dma_device_id": "system", 00:11:13.730 "dma_device_type": 1 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.730 "dma_device_type": 2 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "system", 00:11:13.730 "dma_device_type": 1 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.730 "dma_device_type": 2 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "system", 00:11:13.730 "dma_device_type": 1 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:13.730 "dma_device_type": 2 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "system", 00:11:13.730 "dma_device_type": 1 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.730 "dma_device_type": 2 00:11:13.730 } 00:11:13.730 ], 00:11:13.730 "driver_specific": { 00:11:13.730 "raid": { 00:11:13.730 "uuid": "bd455661-46fb-4259-8f46-5f322932f30b", 00:11:13.730 "strip_size_kb": 0, 00:11:13.730 "state": "online", 00:11:13.730 "raid_level": "raid1", 00:11:13.730 "superblock": false, 00:11:13.730 "num_base_bdevs": 4, 00:11:13.730 "num_base_bdevs_discovered": 4, 00:11:13.730 "num_base_bdevs_operational": 4, 00:11:13.730 "base_bdevs_list": [ 00:11:13.730 { 00:11:13.730 "name": "BaseBdev1", 00:11:13.730 "uuid": "4b24c5e5-a680-437b-bc92-baf5367dab46", 00:11:13.730 "is_configured": true, 00:11:13.730 "data_offset": 0, 00:11:13.730 "data_size": 65536 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "name": "BaseBdev2", 00:11:13.730 "uuid": "d2fc3672-6e20-48bc-9329-28ce63ff134b", 00:11:13.730 "is_configured": true, 00:11:13.730 "data_offset": 0, 00:11:13.730 "data_size": 65536 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "name": "BaseBdev3", 00:11:13.730 "uuid": "53df3fcb-d95a-4d86-87a1-8186946629f7", 00:11:13.730 "is_configured": true, 00:11:13.730 "data_offset": 0, 00:11:13.730 "data_size": 65536 00:11:13.730 }, 00:11:13.730 { 00:11:13.730 "name": "BaseBdev4", 00:11:13.730 "uuid": "dba662ce-ee33-43c1-ae0a-a5ab17d5320a", 00:11:13.730 "is_configured": true, 00:11:13.730 "data_offset": 0, 00:11:13.730 "data_size": 65536 00:11:13.730 } 00:11:13.730 ] 00:11:13.730 } 00:11:13.730 } 00:11:13.730 }' 00:11:13.730 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.730 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.730 BaseBdev2 00:11:13.730 BaseBdev3 
00:11:13.730 BaseBdev4' 00:11:13.730 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.730 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.730 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.731 10:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.731 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.990 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.991 10:22:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.991 [2024-11-19 10:22:27.579545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.991 
10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.991 "name": "Existed_Raid", 00:11:13.991 "uuid": "bd455661-46fb-4259-8f46-5f322932f30b", 00:11:13.991 "strip_size_kb": 0, 00:11:13.991 "state": "online", 00:11:13.991 "raid_level": "raid1", 00:11:13.991 "superblock": false, 00:11:13.991 "num_base_bdevs": 4, 00:11:13.991 "num_base_bdevs_discovered": 3, 00:11:13.991 "num_base_bdevs_operational": 3, 00:11:13.991 "base_bdevs_list": [ 00:11:13.991 { 00:11:13.991 "name": null, 00:11:13.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.991 "is_configured": false, 00:11:13.991 "data_offset": 0, 00:11:13.991 "data_size": 65536 00:11:13.991 }, 00:11:13.991 { 00:11:13.991 "name": "BaseBdev2", 00:11:13.991 "uuid": "d2fc3672-6e20-48bc-9329-28ce63ff134b", 00:11:13.991 "is_configured": true, 00:11:13.991 "data_offset": 0, 00:11:13.991 "data_size": 65536 00:11:13.991 }, 00:11:13.991 { 00:11:13.991 "name": "BaseBdev3", 00:11:13.991 "uuid": "53df3fcb-d95a-4d86-87a1-8186946629f7", 00:11:13.991 "is_configured": true, 00:11:13.991 "data_offset": 0, 
00:11:13.991 "data_size": 65536 00:11:13.991 }, 00:11:13.991 { 00:11:13.991 "name": "BaseBdev4", 00:11:13.991 "uuid": "dba662ce-ee33-43c1-ae0a-a5ab17d5320a", 00:11:13.991 "is_configured": true, 00:11:13.991 "data_offset": 0, 00:11:13.991 "data_size": 65536 00:11:13.991 } 00:11:13.991 ] 00:11:13.991 }' 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.991 10:22:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.560 [2024-11-19 10:22:28.095275] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.560 [2024-11-19 10:22:28.244686] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.560 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 [2024-11-19 10:22:28.397414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:14.821 [2024-11-19 10:22:28.397563] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:14.821 [2024-11-19 10:22:28.493934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.821 [2024-11-19 10:22:28.494056] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:14.821 [2024-11-19 10:22:28.494124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.821 BaseBdev2 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.821 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 [ 00:11:15.082 { 00:11:15.082 "name": "BaseBdev2", 00:11:15.082 "aliases": [ 00:11:15.082 "f240f77b-bbd7-423a-96b6-c1c465a6c877" 00:11:15.082 ], 00:11:15.082 "product_name": "Malloc disk", 00:11:15.082 "block_size": 512, 00:11:15.082 "num_blocks": 65536, 00:11:15.082 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:15.082 "assigned_rate_limits": { 00:11:15.082 "rw_ios_per_sec": 0, 00:11:15.082 "rw_mbytes_per_sec": 0, 00:11:15.082 "r_mbytes_per_sec": 0, 00:11:15.082 "w_mbytes_per_sec": 0 00:11:15.082 }, 00:11:15.082 "claimed": false, 00:11:15.082 "zoned": false, 00:11:15.082 "supported_io_types": { 00:11:15.082 "read": true, 00:11:15.082 "write": true, 00:11:15.082 "unmap": true, 00:11:15.082 "flush": true, 00:11:15.082 "reset": true, 00:11:15.082 "nvme_admin": false, 00:11:15.082 "nvme_io": false, 00:11:15.082 "nvme_io_md": false, 00:11:15.082 "write_zeroes": true, 00:11:15.082 "zcopy": true, 00:11:15.082 "get_zone_info": false, 00:11:15.082 "zone_management": false, 00:11:15.082 "zone_append": false, 
00:11:15.082 "compare": false, 00:11:15.082 "compare_and_write": false, 00:11:15.082 "abort": true, 00:11:15.082 "seek_hole": false, 00:11:15.082 "seek_data": false, 00:11:15.082 "copy": true, 00:11:15.082 "nvme_iov_md": false 00:11:15.082 }, 00:11:15.082 "memory_domains": [ 00:11:15.082 { 00:11:15.082 "dma_device_id": "system", 00:11:15.082 "dma_device_type": 1 00:11:15.082 }, 00:11:15.082 { 00:11:15.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.082 "dma_device_type": 2 00:11:15.082 } 00:11:15.082 ], 00:11:15.082 "driver_specific": {} 00:11:15.082 } 00:11:15.082 ] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 BaseBdev3 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 [ 00:11:15.082 { 00:11:15.082 "name": "BaseBdev3", 00:11:15.082 "aliases": [ 00:11:15.082 "701021ce-f3d0-4441-b21b-bfe5836f1ef8" 00:11:15.082 ], 00:11:15.082 "product_name": "Malloc disk", 00:11:15.082 "block_size": 512, 00:11:15.082 "num_blocks": 65536, 00:11:15.082 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:15.082 "assigned_rate_limits": { 00:11:15.082 "rw_ios_per_sec": 0, 00:11:15.082 "rw_mbytes_per_sec": 0, 00:11:15.082 "r_mbytes_per_sec": 0, 00:11:15.082 "w_mbytes_per_sec": 0 00:11:15.082 }, 00:11:15.082 "claimed": false, 00:11:15.082 "zoned": false, 00:11:15.082 "supported_io_types": { 00:11:15.082 "read": true, 00:11:15.082 "write": true, 00:11:15.082 "unmap": true, 00:11:15.082 "flush": true, 00:11:15.082 "reset": true, 00:11:15.082 "nvme_admin": false, 00:11:15.082 "nvme_io": false, 00:11:15.082 "nvme_io_md": false, 00:11:15.082 "write_zeroes": true, 00:11:15.082 "zcopy": true, 00:11:15.082 "get_zone_info": false, 00:11:15.082 "zone_management": false, 00:11:15.082 "zone_append": false, 
00:11:15.082 "compare": false, 00:11:15.082 "compare_and_write": false, 00:11:15.082 "abort": true, 00:11:15.082 "seek_hole": false, 00:11:15.082 "seek_data": false, 00:11:15.082 "copy": true, 00:11:15.082 "nvme_iov_md": false 00:11:15.082 }, 00:11:15.082 "memory_domains": [ 00:11:15.082 { 00:11:15.082 "dma_device_id": "system", 00:11:15.082 "dma_device_type": 1 00:11:15.082 }, 00:11:15.082 { 00:11:15.082 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.082 "dma_device_type": 2 00:11:15.082 } 00:11:15.082 ], 00:11:15.082 "driver_specific": {} 00:11:15.082 } 00:11:15.082 ] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 BaseBdev4 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.082 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.082 [ 00:11:15.082 { 00:11:15.082 "name": "BaseBdev4", 00:11:15.082 "aliases": [ 00:11:15.082 "3fe9d034-07bd-4eb6-9067-ac5e97abdc26" 00:11:15.082 ], 00:11:15.082 "product_name": "Malloc disk", 00:11:15.082 "block_size": 512, 00:11:15.082 "num_blocks": 65536, 00:11:15.082 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:15.082 "assigned_rate_limits": { 00:11:15.082 "rw_ios_per_sec": 0, 00:11:15.082 "rw_mbytes_per_sec": 0, 00:11:15.082 "r_mbytes_per_sec": 0, 00:11:15.082 "w_mbytes_per_sec": 0 00:11:15.082 }, 00:11:15.082 "claimed": false, 00:11:15.082 "zoned": false, 00:11:15.082 "supported_io_types": { 00:11:15.082 "read": true, 00:11:15.082 "write": true, 00:11:15.082 "unmap": true, 00:11:15.082 "flush": true, 00:11:15.082 "reset": true, 00:11:15.082 "nvme_admin": false, 00:11:15.082 "nvme_io": false, 00:11:15.083 "nvme_io_md": false, 00:11:15.083 "write_zeroes": true, 00:11:15.083 "zcopy": true, 00:11:15.083 "get_zone_info": false, 00:11:15.083 "zone_management": false, 00:11:15.083 "zone_append": false, 
00:11:15.083 "compare": false, 00:11:15.083 "compare_and_write": false, 00:11:15.083 "abort": true, 00:11:15.083 "seek_hole": false, 00:11:15.083 "seek_data": false, 00:11:15.083 "copy": true, 00:11:15.083 "nvme_iov_md": false 00:11:15.083 }, 00:11:15.083 "memory_domains": [ 00:11:15.083 { 00:11:15.083 "dma_device_id": "system", 00:11:15.083 "dma_device_type": 1 00:11:15.083 }, 00:11:15.083 { 00:11:15.083 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.083 "dma_device_type": 2 00:11:15.083 } 00:11:15.083 ], 00:11:15.083 "driver_specific": {} 00:11:15.083 } 00:11:15.083 ] 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.083 [2024-11-19 10:22:28.793214] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.083 [2024-11-19 10:22:28.793313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.083 [2024-11-19 10:22:28.793353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.083 [2024-11-19 10:22:28.795105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.083 [2024-11-19 10:22:28.795196] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:15.083 "name": "Existed_Raid", 00:11:15.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.083 "strip_size_kb": 0, 00:11:15.083 "state": "configuring", 00:11:15.083 "raid_level": "raid1", 00:11:15.083 "superblock": false, 00:11:15.083 "num_base_bdevs": 4, 00:11:15.083 "num_base_bdevs_discovered": 3, 00:11:15.083 "num_base_bdevs_operational": 4, 00:11:15.083 "base_bdevs_list": [ 00:11:15.083 { 00:11:15.083 "name": "BaseBdev1", 00:11:15.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.083 "is_configured": false, 00:11:15.083 "data_offset": 0, 00:11:15.083 "data_size": 0 00:11:15.083 }, 00:11:15.083 { 00:11:15.083 "name": "BaseBdev2", 00:11:15.083 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:15.083 "is_configured": true, 00:11:15.083 "data_offset": 0, 00:11:15.083 "data_size": 65536 00:11:15.083 }, 00:11:15.083 { 00:11:15.083 "name": "BaseBdev3", 00:11:15.083 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:15.083 "is_configured": true, 00:11:15.083 "data_offset": 0, 00:11:15.083 "data_size": 65536 00:11:15.083 }, 00:11:15.083 { 00:11:15.083 "name": "BaseBdev4", 00:11:15.083 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:15.083 "is_configured": true, 00:11:15.083 "data_offset": 0, 00:11:15.083 "data_size": 65536 00:11:15.083 } 00:11:15.083 ] 00:11:15.083 }' 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.083 10:22:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.656 [2024-11-19 10:22:29.192588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
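The `verify_raid_bdev_state` calls above repeatedly run `rpc_cmd bdev_raid_get_bdevs all` and pull out the `Existed_Raid` entry with a `jq` filter (`bdev_raid.sh@113`). A minimal standalone sketch of that filter, using a trimmed sample JSON modeled on the log output rather than a live RPC response:

```shell
# Trimmed sample of `bdev_raid_get_bdevs all` output, reconstructed from the log
# (field subset only) -- a live target returns the full base_bdevs_list as well.
get_bdevs_json='[{"name": "Existed_Raid", "state": "configuring", "raid_level": "raid1", "num_base_bdevs": 4, "num_base_bdevs_discovered": 2}]'

# Same filter as bdev_raid.sh@113: select the raid bdev by name.
raid_bdev_info=$(echo "$get_bdevs_json" | jq -r '.[] | select(.name == "Existed_Raid")')

# verify_raid_bdev_state then compares individual fields against expectations,
# e.g. that the array stays in the "configuring" state while bdevs are missing.
state=$(echo "$raid_bdev_info" | jq -r '.state')
echo "$state"   # prints "configuring"
```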
00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.656 "name": "Existed_Raid", 00:11:15.656 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:15.656 "strip_size_kb": 0, 00:11:15.656 "state": "configuring", 00:11:15.656 "raid_level": "raid1", 00:11:15.656 "superblock": false, 00:11:15.656 "num_base_bdevs": 4, 00:11:15.656 "num_base_bdevs_discovered": 2, 00:11:15.656 "num_base_bdevs_operational": 4, 00:11:15.656 "base_bdevs_list": [ 00:11:15.656 { 00:11:15.656 "name": "BaseBdev1", 00:11:15.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.656 "is_configured": false, 00:11:15.656 "data_offset": 0, 00:11:15.656 "data_size": 0 00:11:15.656 }, 00:11:15.656 { 00:11:15.656 "name": null, 00:11:15.656 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:15.656 "is_configured": false, 00:11:15.656 "data_offset": 0, 00:11:15.656 "data_size": 65536 00:11:15.656 }, 00:11:15.656 { 00:11:15.656 "name": "BaseBdev3", 00:11:15.656 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:15.656 "is_configured": true, 00:11:15.656 "data_offset": 0, 00:11:15.656 "data_size": 65536 00:11:15.656 }, 00:11:15.656 { 00:11:15.656 "name": "BaseBdev4", 00:11:15.656 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:15.656 "is_configured": true, 00:11:15.656 "data_offset": 0, 00:11:15.656 "data_size": 65536 00:11:15.656 } 00:11:15.656 ] 00:11:15.656 }' 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.656 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.919 BaseBdev1 00:11:15.919 [2024-11-19 10:22:29.676464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.919 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.189 [ 00:11:16.189 { 00:11:16.189 "name": "BaseBdev1", 00:11:16.189 "aliases": [ 00:11:16.189 "4c325426-e264-4b2e-acf2-3aa041ff3bd9" 00:11:16.189 ], 00:11:16.189 "product_name": "Malloc disk", 00:11:16.189 "block_size": 512, 00:11:16.189 "num_blocks": 65536, 00:11:16.189 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:16.189 "assigned_rate_limits": { 00:11:16.189 "rw_ios_per_sec": 0, 00:11:16.189 "rw_mbytes_per_sec": 0, 00:11:16.189 "r_mbytes_per_sec": 0, 00:11:16.189 "w_mbytes_per_sec": 0 00:11:16.189 }, 00:11:16.189 "claimed": true, 00:11:16.189 "claim_type": "exclusive_write", 00:11:16.189 "zoned": false, 00:11:16.189 "supported_io_types": { 00:11:16.189 "read": true, 00:11:16.189 "write": true, 00:11:16.189 "unmap": true, 00:11:16.189 "flush": true, 00:11:16.189 "reset": true, 00:11:16.189 "nvme_admin": false, 00:11:16.189 "nvme_io": false, 00:11:16.189 "nvme_io_md": false, 00:11:16.189 "write_zeroes": true, 00:11:16.189 "zcopy": true, 00:11:16.189 "get_zone_info": false, 00:11:16.189 "zone_management": false, 00:11:16.189 "zone_append": false, 00:11:16.189 "compare": false, 00:11:16.189 "compare_and_write": false, 00:11:16.189 "abort": true, 00:11:16.189 "seek_hole": false, 00:11:16.189 "seek_data": false, 00:11:16.189 "copy": true, 00:11:16.189 "nvme_iov_md": false 00:11:16.189 }, 00:11:16.189 "memory_domains": [ 00:11:16.189 { 00:11:16.189 "dma_device_id": "system", 00:11:16.189 "dma_device_type": 1 00:11:16.189 }, 00:11:16.189 { 00:11:16.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.189 "dma_device_type": 2 00:11:16.189 } 00:11:16.189 ], 00:11:16.189 "driver_specific": {} 00:11:16.189 } 00:11:16.189 ] 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
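The `jq '.[0].base_bdevs_list[N].is_configured'` checks in the log (`bdev_raid.sh@295`, `@300`, `@304`) verify that removing a base bdev leaves its slot in `base_bdevs_list` with the uuid intact but `is_configured` flipped to `false`. A runnable sketch of that check against an abridged sample of the log's JSON:

```shell
# Abridged base_bdevs_list after `bdev_raid_remove_base_bdev BaseBdev2`:
# slot 1 keeps its uuid but has name null and is_configured false.
raid_json='[{"name": "Existed_Raid", "base_bdevs_list": [
  {"name": "BaseBdev1", "is_configured": true},
  {"name": null, "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", "is_configured": false}
]}]'

# Same expression as bdev_raid.sh@295 (note: no -r needed, booleans print bare).
slot1=$(echo "$raid_json" | jq '.[0].base_bdevs_list[1].is_configured')
[[ $slot1 == false ]] && echo "slot 1 unconfigured"
```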
00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.189 "name": "Existed_Raid", 00:11:16.189 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:16.189 "strip_size_kb": 0, 00:11:16.189 "state": "configuring", 00:11:16.189 "raid_level": "raid1", 00:11:16.189 "superblock": false, 00:11:16.189 "num_base_bdevs": 4, 00:11:16.189 "num_base_bdevs_discovered": 3, 00:11:16.189 "num_base_bdevs_operational": 4, 00:11:16.189 "base_bdevs_list": [ 00:11:16.189 { 00:11:16.189 "name": "BaseBdev1", 00:11:16.189 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:16.189 "is_configured": true, 00:11:16.189 "data_offset": 0, 00:11:16.189 "data_size": 65536 00:11:16.189 }, 00:11:16.189 { 00:11:16.189 "name": null, 00:11:16.189 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:16.189 "is_configured": false, 00:11:16.189 "data_offset": 0, 00:11:16.189 "data_size": 65536 00:11:16.189 }, 00:11:16.189 { 00:11:16.189 "name": "BaseBdev3", 00:11:16.189 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:16.189 "is_configured": true, 00:11:16.189 "data_offset": 0, 00:11:16.189 "data_size": 65536 00:11:16.189 }, 00:11:16.189 { 00:11:16.189 "name": "BaseBdev4", 00:11:16.189 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:16.189 "is_configured": true, 00:11:16.189 "data_offset": 0, 00:11:16.189 "data_size": 65536 00:11:16.189 } 00:11:16.189 ] 00:11:16.189 }' 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.189 10:22:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 [2024-11-19 10:22:30.151792] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.465 "name": "Existed_Raid", 00:11:16.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.465 "strip_size_kb": 0, 00:11:16.465 "state": "configuring", 00:11:16.465 "raid_level": "raid1", 00:11:16.465 "superblock": false, 00:11:16.465 "num_base_bdevs": 4, 00:11:16.465 "num_base_bdevs_discovered": 2, 00:11:16.465 "num_base_bdevs_operational": 4, 00:11:16.465 "base_bdevs_list": [ 00:11:16.465 { 00:11:16.465 "name": "BaseBdev1", 00:11:16.465 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:16.465 "is_configured": true, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 65536 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "name": null, 00:11:16.465 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:16.465 "is_configured": false, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 65536 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "name": null, 00:11:16.465 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:16.465 "is_configured": false, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 65536 00:11:16.465 }, 00:11:16.465 { 00:11:16.465 "name": "BaseBdev4", 00:11:16.465 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:16.465 "is_configured": true, 00:11:16.465 "data_offset": 0, 00:11:16.465 "data_size": 65536 00:11:16.465 } 00:11:16.465 ] 00:11:16.465 }' 00:11:16.465 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.465 10:22:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.034 [2024-11-19 10:22:30.646981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.034 10:22:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.034 "name": "Existed_Raid", 00:11:17.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.034 "strip_size_kb": 0, 00:11:17.034 "state": "configuring", 00:11:17.034 "raid_level": "raid1", 00:11:17.034 "superblock": false, 00:11:17.034 "num_base_bdevs": 4, 00:11:17.034 "num_base_bdevs_discovered": 3, 00:11:17.034 "num_base_bdevs_operational": 4, 00:11:17.034 "base_bdevs_list": [ 00:11:17.034 { 00:11:17.034 "name": "BaseBdev1", 00:11:17.034 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:17.034 "is_configured": true, 00:11:17.034 "data_offset": 0, 00:11:17.034 "data_size": 65536 00:11:17.034 }, 00:11:17.034 { 00:11:17.034 "name": null, 00:11:17.034 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:17.034 "is_configured": false, 00:11:17.034 "data_offset": 
0, 00:11:17.034 "data_size": 65536 00:11:17.034 }, 00:11:17.034 { 00:11:17.034 "name": "BaseBdev3", 00:11:17.034 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:17.034 "is_configured": true, 00:11:17.034 "data_offset": 0, 00:11:17.034 "data_size": 65536 00:11:17.034 }, 00:11:17.034 { 00:11:17.034 "name": "BaseBdev4", 00:11:17.034 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:17.034 "is_configured": true, 00:11:17.034 "data_offset": 0, 00:11:17.034 "data_size": 65536 00:11:17.034 } 00:11:17.034 ] 00:11:17.034 }' 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.034 10:22:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.604 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.605 [2024-11-19 10:22:31.190058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.605 10:22:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.605 "name": "Existed_Raid", 00:11:17.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.605 "strip_size_kb": 0, 00:11:17.605 "state": "configuring", 00:11:17.605 
"raid_level": "raid1", 00:11:17.605 "superblock": false, 00:11:17.605 "num_base_bdevs": 4, 00:11:17.605 "num_base_bdevs_discovered": 2, 00:11:17.605 "num_base_bdevs_operational": 4, 00:11:17.605 "base_bdevs_list": [ 00:11:17.605 { 00:11:17.605 "name": null, 00:11:17.605 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:17.605 "is_configured": false, 00:11:17.605 "data_offset": 0, 00:11:17.605 "data_size": 65536 00:11:17.605 }, 00:11:17.605 { 00:11:17.605 "name": null, 00:11:17.605 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:17.605 "is_configured": false, 00:11:17.605 "data_offset": 0, 00:11:17.605 "data_size": 65536 00:11:17.605 }, 00:11:17.605 { 00:11:17.605 "name": "BaseBdev3", 00:11:17.605 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:17.605 "is_configured": true, 00:11:17.605 "data_offset": 0, 00:11:17.605 "data_size": 65536 00:11:17.605 }, 00:11:17.605 { 00:11:17.605 "name": "BaseBdev4", 00:11:17.605 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:17.605 "is_configured": true, 00:11:17.605 "data_offset": 0, 00:11:17.605 "data_size": 65536 00:11:17.605 } 00:11:17.605 ] 00:11:17.605 }' 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.605 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.175 [2024-11-19 10:22:31.756849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.175 "name": "Existed_Raid", 00:11:18.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.175 "strip_size_kb": 0, 00:11:18.175 "state": "configuring", 00:11:18.175 "raid_level": "raid1", 00:11:18.175 "superblock": false, 00:11:18.175 "num_base_bdevs": 4, 00:11:18.175 "num_base_bdevs_discovered": 3, 00:11:18.175 "num_base_bdevs_operational": 4, 00:11:18.175 "base_bdevs_list": [ 00:11:18.175 { 00:11:18.175 "name": null, 00:11:18.175 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:18.175 "is_configured": false, 00:11:18.175 "data_offset": 0, 00:11:18.175 "data_size": 65536 00:11:18.175 }, 00:11:18.175 { 00:11:18.175 "name": "BaseBdev2", 00:11:18.175 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:18.175 "is_configured": true, 00:11:18.175 "data_offset": 0, 00:11:18.175 "data_size": 65536 00:11:18.175 }, 00:11:18.175 { 00:11:18.175 "name": "BaseBdev3", 00:11:18.175 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:18.175 "is_configured": true, 00:11:18.175 "data_offset": 0, 00:11:18.175 "data_size": 65536 00:11:18.175 }, 00:11:18.175 { 00:11:18.175 "name": "BaseBdev4", 00:11:18.175 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:18.175 "is_configured": true, 00:11:18.175 "data_offset": 0, 00:11:18.175 "data_size": 65536 00:11:18.175 } 00:11:18.175 ] 00:11:18.175 }' 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.175 10:22:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.435 10:22:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.435 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.435 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c325426-e264-4b2e-acf2-3aa041ff3bd9 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.697 [2024-11-19 10:22:32.324821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.697 [2024-11-19 10:22:32.324950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:18.697 [2024-11-19 10:22:32.324980] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:18.697 
[2024-11-19 10:22:32.325320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:18.697 [2024-11-19 10:22:32.325526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:18.697 [2024-11-19 10:22:32.325569] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:18.697 [2024-11-19 10:22:32.325880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.697 NewBaseBdev 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.697 [ 00:11:18.697 { 00:11:18.697 "name": "NewBaseBdev", 00:11:18.697 "aliases": [ 00:11:18.697 "4c325426-e264-4b2e-acf2-3aa041ff3bd9" 00:11:18.697 ], 00:11:18.697 "product_name": "Malloc disk", 00:11:18.697 "block_size": 512, 00:11:18.697 "num_blocks": 65536, 00:11:18.697 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:18.697 "assigned_rate_limits": { 00:11:18.697 "rw_ios_per_sec": 0, 00:11:18.697 "rw_mbytes_per_sec": 0, 00:11:18.697 "r_mbytes_per_sec": 0, 00:11:18.697 "w_mbytes_per_sec": 0 00:11:18.697 }, 00:11:18.697 "claimed": true, 00:11:18.697 "claim_type": "exclusive_write", 00:11:18.697 "zoned": false, 00:11:18.697 "supported_io_types": { 00:11:18.697 "read": true, 00:11:18.697 "write": true, 00:11:18.697 "unmap": true, 00:11:18.697 "flush": true, 00:11:18.697 "reset": true, 00:11:18.697 "nvme_admin": false, 00:11:18.697 "nvme_io": false, 00:11:18.697 "nvme_io_md": false, 00:11:18.697 "write_zeroes": true, 00:11:18.697 "zcopy": true, 00:11:18.697 "get_zone_info": false, 00:11:18.697 "zone_management": false, 00:11:18.697 "zone_append": false, 00:11:18.697 "compare": false, 00:11:18.697 "compare_and_write": false, 00:11:18.697 "abort": true, 00:11:18.697 "seek_hole": false, 00:11:18.697 "seek_data": false, 00:11:18.697 "copy": true, 00:11:18.697 "nvme_iov_md": false 00:11:18.697 }, 00:11:18.697 "memory_domains": [ 00:11:18.697 { 00:11:18.697 "dma_device_id": "system", 00:11:18.697 "dma_device_type": 1 00:11:18.697 }, 00:11:18.697 { 00:11:18.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.697 "dma_device_type": 2 00:11:18.697 } 00:11:18.697 ], 00:11:18.697 "driver_specific": {} 00:11:18.697 } 00:11:18.697 ] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.697 "name": "Existed_Raid", 00:11:18.697 "uuid": "ebd72244-8f06-456e-b630-d3591aba87de", 00:11:18.697 "strip_size_kb": 0, 00:11:18.697 "state": "online", 00:11:18.697 
"raid_level": "raid1", 00:11:18.697 "superblock": false, 00:11:18.697 "num_base_bdevs": 4, 00:11:18.697 "num_base_bdevs_discovered": 4, 00:11:18.697 "num_base_bdevs_operational": 4, 00:11:18.697 "base_bdevs_list": [ 00:11:18.697 { 00:11:18.697 "name": "NewBaseBdev", 00:11:18.697 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:18.697 "is_configured": true, 00:11:18.697 "data_offset": 0, 00:11:18.697 "data_size": 65536 00:11:18.697 }, 00:11:18.697 { 00:11:18.697 "name": "BaseBdev2", 00:11:18.697 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:18.697 "is_configured": true, 00:11:18.697 "data_offset": 0, 00:11:18.697 "data_size": 65536 00:11:18.697 }, 00:11:18.697 { 00:11:18.697 "name": "BaseBdev3", 00:11:18.697 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:18.697 "is_configured": true, 00:11:18.697 "data_offset": 0, 00:11:18.697 "data_size": 65536 00:11:18.697 }, 00:11:18.697 { 00:11:18.697 "name": "BaseBdev4", 00:11:18.697 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:18.697 "is_configured": true, 00:11:18.697 "data_offset": 0, 00:11:18.697 "data_size": 65536 00:11:18.697 } 00:11:18.697 ] 00:11:18.697 }' 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.697 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.267 [2024-11-19 10:22:32.792469] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.267 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.267 "name": "Existed_Raid", 00:11:19.267 "aliases": [ 00:11:19.267 "ebd72244-8f06-456e-b630-d3591aba87de" 00:11:19.267 ], 00:11:19.267 "product_name": "Raid Volume", 00:11:19.267 "block_size": 512, 00:11:19.267 "num_blocks": 65536, 00:11:19.267 "uuid": "ebd72244-8f06-456e-b630-d3591aba87de", 00:11:19.267 "assigned_rate_limits": { 00:11:19.267 "rw_ios_per_sec": 0, 00:11:19.267 "rw_mbytes_per_sec": 0, 00:11:19.267 "r_mbytes_per_sec": 0, 00:11:19.267 "w_mbytes_per_sec": 0 00:11:19.267 }, 00:11:19.267 "claimed": false, 00:11:19.267 "zoned": false, 00:11:19.267 "supported_io_types": { 00:11:19.267 "read": true, 00:11:19.267 "write": true, 00:11:19.267 "unmap": false, 00:11:19.267 "flush": false, 00:11:19.267 "reset": true, 00:11:19.267 "nvme_admin": false, 00:11:19.267 "nvme_io": false, 00:11:19.267 "nvme_io_md": false, 00:11:19.267 "write_zeroes": true, 00:11:19.267 "zcopy": false, 00:11:19.267 "get_zone_info": false, 00:11:19.267 "zone_management": false, 00:11:19.267 "zone_append": false, 00:11:19.267 "compare": false, 00:11:19.267 "compare_and_write": false, 00:11:19.267 "abort": false, 00:11:19.267 "seek_hole": false, 00:11:19.267 "seek_data": false, 00:11:19.267 
"copy": false, 00:11:19.267 "nvme_iov_md": false 00:11:19.267 }, 00:11:19.267 "memory_domains": [ 00:11:19.267 { 00:11:19.267 "dma_device_id": "system", 00:11:19.267 "dma_device_type": 1 00:11:19.267 }, 00:11:19.267 { 00:11:19.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.267 "dma_device_type": 2 00:11:19.267 }, 00:11:19.267 { 00:11:19.267 "dma_device_id": "system", 00:11:19.267 "dma_device_type": 1 00:11:19.267 }, 00:11:19.267 { 00:11:19.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.267 "dma_device_type": 2 00:11:19.267 }, 00:11:19.267 { 00:11:19.267 "dma_device_id": "system", 00:11:19.267 "dma_device_type": 1 00:11:19.267 }, 00:11:19.267 { 00:11:19.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.267 "dma_device_type": 2 00:11:19.267 }, 00:11:19.267 { 00:11:19.268 "dma_device_id": "system", 00:11:19.268 "dma_device_type": 1 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.268 "dma_device_type": 2 00:11:19.268 } 00:11:19.268 ], 00:11:19.268 "driver_specific": { 00:11:19.268 "raid": { 00:11:19.268 "uuid": "ebd72244-8f06-456e-b630-d3591aba87de", 00:11:19.268 "strip_size_kb": 0, 00:11:19.268 "state": "online", 00:11:19.268 "raid_level": "raid1", 00:11:19.268 "superblock": false, 00:11:19.268 "num_base_bdevs": 4, 00:11:19.268 "num_base_bdevs_discovered": 4, 00:11:19.268 "num_base_bdevs_operational": 4, 00:11:19.268 "base_bdevs_list": [ 00:11:19.268 { 00:11:19.268 "name": "NewBaseBdev", 00:11:19.268 "uuid": "4c325426-e264-4b2e-acf2-3aa041ff3bd9", 00:11:19.268 "is_configured": true, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 65536 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "name": "BaseBdev2", 00:11:19.268 "uuid": "f240f77b-bbd7-423a-96b6-c1c465a6c877", 00:11:19.268 "is_configured": true, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 65536 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "name": "BaseBdev3", 00:11:19.268 "uuid": "701021ce-f3d0-4441-b21b-bfe5836f1ef8", 00:11:19.268 
"is_configured": true, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 65536 00:11:19.268 }, 00:11:19.268 { 00:11:19.268 "name": "BaseBdev4", 00:11:19.268 "uuid": "3fe9d034-07bd-4eb6-9067-ac5e97abdc26", 00:11:19.268 "is_configured": true, 00:11:19.268 "data_offset": 0, 00:11:19.268 "data_size": 65536 00:11:19.268 } 00:11:19.268 ] 00:11:19.268 } 00:11:19.268 } 00:11:19.268 }' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:19.268 BaseBdev2 00:11:19.268 BaseBdev3 00:11:19.268 BaseBdev4' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.268 10:22:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.268 10:22:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.268 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.528 10:22:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.528 [2024-11-19 10:22:33.119514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.528 [2024-11-19 10:22:33.119544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.528 [2024-11-19 10:22:33.119635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.528 [2024-11-19 10:22:33.119920] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.528 [2024-11-19 10:22:33.119934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72942 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72942 ']' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72942 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72942 00:11:19.528 killing process with pid 72942 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72942' 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72942 00:11:19.528 [2024-11-19 10:22:33.156002] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.528 10:22:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72942 00:11:19.788 [2024-11-19 10:22:33.536411] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:21.168 10:22:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:21.168 00:11:21.168 real 0m11.231s 00:11:21.168 user 0m17.825s 00:11:21.168 sys 0m1.986s 00:11:21.168 10:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:21.168 10:22:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.168 ************************************ 00:11:21.168 END TEST raid_state_function_test 00:11:21.168 ************************************ 
00:11:21.168 10:22:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:21.168 10:22:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.168 10:22:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.168 10:22:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.168 ************************************ 00:11:21.168 START TEST raid_state_function_test_sb 00:11:21.169 ************************************ 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.169 
10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73608 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73608' 00:11:21.169 Process raid pid: 73608 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73608 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73608 ']' 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.169 10:22:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.169 [2024-11-19 10:22:34.762033] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:21.169 [2024-11-19 10:22:34.762239] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.169 [2024-11-19 10:22:34.922348] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.429 [2024-11-19 10:22:35.036411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.689 [2024-11-19 10:22:35.237297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.689 [2024-11-19 10:22:35.237356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.948 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.948 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:21.948 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.948 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.949 [2024-11-19 10:22:35.584067] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.949 [2024-11-19 10:22:35.584167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.949 [2024-11-19 10:22:35.584198] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.949 [2024-11-19 10:22:35.584223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.949 [2024-11-19 10:22:35.584242] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:21.949 [2024-11-19 10:22:35.584264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.949 [2024-11-19 10:22:35.584282] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.949 [2024-11-19 10:22:35.584303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.949 10:22:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.949 "name": "Existed_Raid", 00:11:21.949 "uuid": "6e2c41f3-2c42-44be-93bf-e1953e3e7f88", 00:11:21.949 "strip_size_kb": 0, 00:11:21.949 "state": "configuring", 00:11:21.949 "raid_level": "raid1", 00:11:21.949 "superblock": true, 00:11:21.949 "num_base_bdevs": 4, 00:11:21.949 "num_base_bdevs_discovered": 0, 00:11:21.949 "num_base_bdevs_operational": 4, 00:11:21.949 "base_bdevs_list": [ 00:11:21.949 { 00:11:21.949 "name": "BaseBdev1", 00:11:21.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.949 "is_configured": false, 00:11:21.949 "data_offset": 0, 00:11:21.949 "data_size": 0 00:11:21.949 }, 00:11:21.949 { 00:11:21.949 "name": "BaseBdev2", 00:11:21.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.949 "is_configured": false, 00:11:21.949 "data_offset": 0, 00:11:21.949 "data_size": 0 00:11:21.949 }, 00:11:21.949 { 00:11:21.949 "name": "BaseBdev3", 00:11:21.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.949 "is_configured": false, 00:11:21.949 "data_offset": 0, 00:11:21.949 "data_size": 0 00:11:21.949 }, 00:11:21.949 { 00:11:21.949 "name": "BaseBdev4", 00:11:21.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.949 "is_configured": false, 00:11:21.949 "data_offset": 0, 00:11:21.949 "data_size": 0 00:11:21.949 } 00:11:21.949 ] 00:11:21.949 }' 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.949 10:22:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.519 [2024-11-19 10:22:36.011300] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.519 [2024-11-19 10:22:36.011339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.519 [2024-11-19 10:22:36.019276] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.519 [2024-11-19 10:22:36.019317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.519 [2024-11-19 10:22:36.019326] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.519 [2024-11-19 10:22:36.019334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.519 [2024-11-19 10:22:36.019340] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.519 [2024-11-19 10:22:36.019348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.519 [2024-11-19 10:22:36.019354] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:22.519 [2024-11-19 10:22:36.019362] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.519 [2024-11-19 10:22:36.063056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.519 BaseBdev1 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.519 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.519 [ 00:11:22.519 { 00:11:22.519 "name": "BaseBdev1", 00:11:22.519 "aliases": [ 00:11:22.519 "5fa014d3-e935-486a-aa41-916976f10d14" 00:11:22.519 ], 00:11:22.519 "product_name": "Malloc disk", 00:11:22.519 "block_size": 512, 00:11:22.519 "num_blocks": 65536, 00:11:22.519 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:22.519 "assigned_rate_limits": { 00:11:22.519 "rw_ios_per_sec": 0, 00:11:22.519 "rw_mbytes_per_sec": 0, 00:11:22.519 "r_mbytes_per_sec": 0, 00:11:22.519 "w_mbytes_per_sec": 0 00:11:22.519 }, 00:11:22.519 "claimed": true, 00:11:22.519 "claim_type": "exclusive_write", 00:11:22.519 "zoned": false, 00:11:22.519 "supported_io_types": { 00:11:22.519 "read": true, 00:11:22.519 "write": true, 00:11:22.519 "unmap": true, 00:11:22.519 "flush": true, 00:11:22.519 "reset": true, 00:11:22.519 "nvme_admin": false, 00:11:22.519 "nvme_io": false, 00:11:22.519 "nvme_io_md": false, 00:11:22.519 "write_zeroes": true, 00:11:22.519 "zcopy": true, 00:11:22.519 "get_zone_info": false, 00:11:22.519 "zone_management": false, 00:11:22.519 "zone_append": false, 00:11:22.519 "compare": false, 00:11:22.519 "compare_and_write": false, 00:11:22.519 "abort": true, 00:11:22.519 "seek_hole": false, 00:11:22.519 "seek_data": false, 00:11:22.519 "copy": true, 00:11:22.519 "nvme_iov_md": false 00:11:22.519 }, 00:11:22.520 "memory_domains": [ 00:11:22.520 { 00:11:22.520 "dma_device_id": "system", 00:11:22.520 "dma_device_type": 1 00:11:22.520 }, 00:11:22.520 { 00:11:22.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.520 "dma_device_type": 2 00:11:22.520 } 00:11:22.520 ], 00:11:22.520 "driver_specific": {} 
00:11:22.520 } 00:11:22.520 ] 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.520 "name": "Existed_Raid", 00:11:22.520 "uuid": "18abbc0c-b92e-4962-82f7-01c056e2dbf5", 00:11:22.520 "strip_size_kb": 0, 00:11:22.520 "state": "configuring", 00:11:22.520 "raid_level": "raid1", 00:11:22.520 "superblock": true, 00:11:22.520 "num_base_bdevs": 4, 00:11:22.520 "num_base_bdevs_discovered": 1, 00:11:22.520 "num_base_bdevs_operational": 4, 00:11:22.520 "base_bdevs_list": [ 00:11:22.520 { 00:11:22.520 "name": "BaseBdev1", 00:11:22.520 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:22.520 "is_configured": true, 00:11:22.520 "data_offset": 2048, 00:11:22.520 "data_size": 63488 00:11:22.520 }, 00:11:22.520 { 00:11:22.520 "name": "BaseBdev2", 00:11:22.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.520 "is_configured": false, 00:11:22.520 "data_offset": 0, 00:11:22.520 "data_size": 0 00:11:22.520 }, 00:11:22.520 { 00:11:22.520 "name": "BaseBdev3", 00:11:22.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.520 "is_configured": false, 00:11:22.520 "data_offset": 0, 00:11:22.520 "data_size": 0 00:11:22.520 }, 00:11:22.520 { 00:11:22.520 "name": "BaseBdev4", 00:11:22.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.520 "is_configured": false, 00:11:22.520 "data_offset": 0, 00:11:22.520 "data_size": 0 00:11:22.520 } 00:11:22.520 ] 00:11:22.520 }' 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.520 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:22.781 [2024-11-19 10:22:36.470374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.781 [2024-11-19 10:22:36.470477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.781 [2024-11-19 10:22:36.478404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.781 [2024-11-19 10:22:36.480256] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.781 [2024-11-19 10:22:36.480335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.781 [2024-11-19 10:22:36.480364] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.781 [2024-11-19 10:22:36.480390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.781 [2024-11-19 10:22:36.480410] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.781 [2024-11-19 10:22:36.480431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:22.781 10:22:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.781 "name": 
"Existed_Raid", 00:11:22.781 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:22.781 "strip_size_kb": 0, 00:11:22.781 "state": "configuring", 00:11:22.781 "raid_level": "raid1", 00:11:22.781 "superblock": true, 00:11:22.781 "num_base_bdevs": 4, 00:11:22.781 "num_base_bdevs_discovered": 1, 00:11:22.781 "num_base_bdevs_operational": 4, 00:11:22.781 "base_bdevs_list": [ 00:11:22.781 { 00:11:22.781 "name": "BaseBdev1", 00:11:22.781 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:22.781 "is_configured": true, 00:11:22.781 "data_offset": 2048, 00:11:22.781 "data_size": 63488 00:11:22.781 }, 00:11:22.781 { 00:11:22.781 "name": "BaseBdev2", 00:11:22.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.781 "is_configured": false, 00:11:22.781 "data_offset": 0, 00:11:22.781 "data_size": 0 00:11:22.781 }, 00:11:22.781 { 00:11:22.781 "name": "BaseBdev3", 00:11:22.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.781 "is_configured": false, 00:11:22.781 "data_offset": 0, 00:11:22.781 "data_size": 0 00:11:22.781 }, 00:11:22.781 { 00:11:22.781 "name": "BaseBdev4", 00:11:22.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.781 "is_configured": false, 00:11:22.781 "data_offset": 0, 00:11:22.781 "data_size": 0 00:11:22.781 } 00:11:22.781 ] 00:11:22.781 }' 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.781 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.352 [2024-11-19 10:22:36.920706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.352 
BaseBdev2 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.352 [ 00:11:23.352 { 00:11:23.352 "name": "BaseBdev2", 00:11:23.352 "aliases": [ 00:11:23.352 "10187101-d153-4d91-9373-589c66a9f39f" 00:11:23.352 ], 00:11:23.352 "product_name": "Malloc disk", 00:11:23.352 "block_size": 512, 00:11:23.352 "num_blocks": 65536, 00:11:23.352 "uuid": "10187101-d153-4d91-9373-589c66a9f39f", 00:11:23.352 "assigned_rate_limits": { 
00:11:23.352 "rw_ios_per_sec": 0, 00:11:23.352 "rw_mbytes_per_sec": 0, 00:11:23.352 "r_mbytes_per_sec": 0, 00:11:23.352 "w_mbytes_per_sec": 0 00:11:23.352 }, 00:11:23.352 "claimed": true, 00:11:23.352 "claim_type": "exclusive_write", 00:11:23.352 "zoned": false, 00:11:23.352 "supported_io_types": { 00:11:23.352 "read": true, 00:11:23.352 "write": true, 00:11:23.352 "unmap": true, 00:11:23.352 "flush": true, 00:11:23.352 "reset": true, 00:11:23.352 "nvme_admin": false, 00:11:23.352 "nvme_io": false, 00:11:23.352 "nvme_io_md": false, 00:11:23.352 "write_zeroes": true, 00:11:23.352 "zcopy": true, 00:11:23.352 "get_zone_info": false, 00:11:23.352 "zone_management": false, 00:11:23.352 "zone_append": false, 00:11:23.352 "compare": false, 00:11:23.352 "compare_and_write": false, 00:11:23.352 "abort": true, 00:11:23.352 "seek_hole": false, 00:11:23.352 "seek_data": false, 00:11:23.352 "copy": true, 00:11:23.352 "nvme_iov_md": false 00:11:23.352 }, 00:11:23.352 "memory_domains": [ 00:11:23.352 { 00:11:23.352 "dma_device_id": "system", 00:11:23.352 "dma_device_type": 1 00:11:23.352 }, 00:11:23.352 { 00:11:23.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.352 "dma_device_type": 2 00:11:23.352 } 00:11:23.352 ], 00:11:23.352 "driver_specific": {} 00:11:23.352 } 00:11:23.352 ] 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.352 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.353 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.353 "name": "Existed_Raid", 00:11:23.353 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:23.353 "strip_size_kb": 0, 00:11:23.353 "state": "configuring", 00:11:23.353 "raid_level": "raid1", 00:11:23.353 "superblock": true, 00:11:23.353 "num_base_bdevs": 4, 00:11:23.353 "num_base_bdevs_discovered": 2, 00:11:23.353 "num_base_bdevs_operational": 4, 00:11:23.353 
"base_bdevs_list": [ 00:11:23.353 { 00:11:23.353 "name": "BaseBdev1", 00:11:23.353 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:23.353 "is_configured": true, 00:11:23.353 "data_offset": 2048, 00:11:23.353 "data_size": 63488 00:11:23.353 }, 00:11:23.353 { 00:11:23.353 "name": "BaseBdev2", 00:11:23.353 "uuid": "10187101-d153-4d91-9373-589c66a9f39f", 00:11:23.353 "is_configured": true, 00:11:23.353 "data_offset": 2048, 00:11:23.353 "data_size": 63488 00:11:23.353 }, 00:11:23.353 { 00:11:23.353 "name": "BaseBdev3", 00:11:23.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.353 "is_configured": false, 00:11:23.353 "data_offset": 0, 00:11:23.353 "data_size": 0 00:11:23.353 }, 00:11:23.353 { 00:11:23.353 "name": "BaseBdev4", 00:11:23.353 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.353 "is_configured": false, 00:11:23.353 "data_offset": 0, 00:11:23.353 "data_size": 0 00:11:23.353 } 00:11:23.353 ] 00:11:23.353 }' 00:11:23.353 10:22:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.353 10:22:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.613 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.613 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.613 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.873 [2024-11-19 10:22:37.410331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.873 BaseBdev3 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.873 [ 00:11:23.873 { 00:11:23.873 "name": "BaseBdev3", 00:11:23.873 "aliases": [ 00:11:23.873 "c34b4594-fb93-4dfa-89a6-b73fa0142921" 00:11:23.873 ], 00:11:23.873 "product_name": "Malloc disk", 00:11:23.873 "block_size": 512, 00:11:23.873 "num_blocks": 65536, 00:11:23.873 "uuid": "c34b4594-fb93-4dfa-89a6-b73fa0142921", 00:11:23.873 "assigned_rate_limits": { 00:11:23.873 "rw_ios_per_sec": 0, 00:11:23.873 "rw_mbytes_per_sec": 0, 00:11:23.873 "r_mbytes_per_sec": 0, 00:11:23.873 "w_mbytes_per_sec": 0 00:11:23.873 }, 00:11:23.873 "claimed": true, 00:11:23.873 "claim_type": "exclusive_write", 00:11:23.873 "zoned": false, 00:11:23.873 "supported_io_types": { 00:11:23.873 "read": true, 00:11:23.873 
"write": true, 00:11:23.873 "unmap": true, 00:11:23.873 "flush": true, 00:11:23.873 "reset": true, 00:11:23.873 "nvme_admin": false, 00:11:23.873 "nvme_io": false, 00:11:23.873 "nvme_io_md": false, 00:11:23.873 "write_zeroes": true, 00:11:23.873 "zcopy": true, 00:11:23.873 "get_zone_info": false, 00:11:23.873 "zone_management": false, 00:11:23.873 "zone_append": false, 00:11:23.873 "compare": false, 00:11:23.873 "compare_and_write": false, 00:11:23.873 "abort": true, 00:11:23.873 "seek_hole": false, 00:11:23.873 "seek_data": false, 00:11:23.873 "copy": true, 00:11:23.873 "nvme_iov_md": false 00:11:23.873 }, 00:11:23.873 "memory_domains": [ 00:11:23.873 { 00:11:23.873 "dma_device_id": "system", 00:11:23.873 "dma_device_type": 1 00:11:23.873 }, 00:11:23.873 { 00:11:23.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.873 "dma_device_type": 2 00:11:23.873 } 00:11:23.873 ], 00:11:23.873 "driver_specific": {} 00:11:23.873 } 00:11:23.873 ] 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.873 "name": "Existed_Raid", 00:11:23.873 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:23.873 "strip_size_kb": 0, 00:11:23.873 "state": "configuring", 00:11:23.873 "raid_level": "raid1", 00:11:23.873 "superblock": true, 00:11:23.873 "num_base_bdevs": 4, 00:11:23.873 "num_base_bdevs_discovered": 3, 00:11:23.873 "num_base_bdevs_operational": 4, 00:11:23.873 "base_bdevs_list": [ 00:11:23.873 { 00:11:23.873 "name": "BaseBdev1", 00:11:23.873 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:23.873 "is_configured": true, 00:11:23.873 "data_offset": 2048, 00:11:23.873 "data_size": 63488 00:11:23.873 }, 00:11:23.873 { 00:11:23.873 "name": "BaseBdev2", 00:11:23.873 "uuid": 
"10187101-d153-4d91-9373-589c66a9f39f", 00:11:23.873 "is_configured": true, 00:11:23.873 "data_offset": 2048, 00:11:23.873 "data_size": 63488 00:11:23.873 }, 00:11:23.873 { 00:11:23.873 "name": "BaseBdev3", 00:11:23.873 "uuid": "c34b4594-fb93-4dfa-89a6-b73fa0142921", 00:11:23.873 "is_configured": true, 00:11:23.873 "data_offset": 2048, 00:11:23.873 "data_size": 63488 00:11:23.873 }, 00:11:23.873 { 00:11:23.873 "name": "BaseBdev4", 00:11:23.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.873 "is_configured": false, 00:11:23.873 "data_offset": 0, 00:11:23.873 "data_size": 0 00:11:23.873 } 00:11:23.873 ] 00:11:23.873 }' 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.873 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.134 [2024-11-19 10:22:37.909568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.134 [2024-11-19 10:22:37.909919] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.134 [2024-11-19 10:22:37.909970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.134 [2024-11-19 10:22:37.910270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.134 [2024-11-19 10:22:37.910467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.134 [2024-11-19 10:22:37.910514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:24.134 BaseBdev4 00:11:24.134 [2024-11-19 10:22:37.910687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.134 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.394 [ 00:11:24.394 { 00:11:24.394 "name": "BaseBdev4", 00:11:24.394 "aliases": [ 00:11:24.394 "2f92ce53-35be-4adc-96bd-a0a3620ff3aa" 00:11:24.394 ], 00:11:24.394 "product_name": "Malloc disk", 00:11:24.394 "block_size": 512, 00:11:24.394 
"num_blocks": 65536, 00:11:24.394 "uuid": "2f92ce53-35be-4adc-96bd-a0a3620ff3aa", 00:11:24.394 "assigned_rate_limits": { 00:11:24.394 "rw_ios_per_sec": 0, 00:11:24.394 "rw_mbytes_per_sec": 0, 00:11:24.394 "r_mbytes_per_sec": 0, 00:11:24.394 "w_mbytes_per_sec": 0 00:11:24.394 }, 00:11:24.394 "claimed": true, 00:11:24.394 "claim_type": "exclusive_write", 00:11:24.394 "zoned": false, 00:11:24.394 "supported_io_types": { 00:11:24.394 "read": true, 00:11:24.394 "write": true, 00:11:24.394 "unmap": true, 00:11:24.394 "flush": true, 00:11:24.394 "reset": true, 00:11:24.394 "nvme_admin": false, 00:11:24.394 "nvme_io": false, 00:11:24.394 "nvme_io_md": false, 00:11:24.394 "write_zeroes": true, 00:11:24.394 "zcopy": true, 00:11:24.394 "get_zone_info": false, 00:11:24.394 "zone_management": false, 00:11:24.394 "zone_append": false, 00:11:24.394 "compare": false, 00:11:24.394 "compare_and_write": false, 00:11:24.394 "abort": true, 00:11:24.394 "seek_hole": false, 00:11:24.394 "seek_data": false, 00:11:24.394 "copy": true, 00:11:24.394 "nvme_iov_md": false 00:11:24.394 }, 00:11:24.394 "memory_domains": [ 00:11:24.394 { 00:11:24.394 "dma_device_id": "system", 00:11:24.394 "dma_device_type": 1 00:11:24.394 }, 00:11:24.394 { 00:11:24.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.394 "dma_device_type": 2 00:11:24.394 } 00:11:24.394 ], 00:11:24.394 "driver_specific": {} 00:11:24.394 } 00:11:24.394 ] 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.394 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.395 10:22:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.395 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.395 "name": "Existed_Raid", 00:11:24.395 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:24.395 "strip_size_kb": 0, 00:11:24.395 "state": "online", 00:11:24.395 "raid_level": "raid1", 00:11:24.395 "superblock": true, 00:11:24.395 "num_base_bdevs": 4, 
00:11:24.395 "num_base_bdevs_discovered": 4, 00:11:24.395 "num_base_bdevs_operational": 4, 00:11:24.395 "base_bdevs_list": [ 00:11:24.395 { 00:11:24.395 "name": "BaseBdev1", 00:11:24.395 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:24.395 "is_configured": true, 00:11:24.395 "data_offset": 2048, 00:11:24.395 "data_size": 63488 00:11:24.395 }, 00:11:24.395 { 00:11:24.395 "name": "BaseBdev2", 00:11:24.395 "uuid": "10187101-d153-4d91-9373-589c66a9f39f", 00:11:24.395 "is_configured": true, 00:11:24.395 "data_offset": 2048, 00:11:24.395 "data_size": 63488 00:11:24.395 }, 00:11:24.395 { 00:11:24.395 "name": "BaseBdev3", 00:11:24.395 "uuid": "c34b4594-fb93-4dfa-89a6-b73fa0142921", 00:11:24.395 "is_configured": true, 00:11:24.395 "data_offset": 2048, 00:11:24.395 "data_size": 63488 00:11:24.395 }, 00:11:24.395 { 00:11:24.395 "name": "BaseBdev4", 00:11:24.395 "uuid": "2f92ce53-35be-4adc-96bd-a0a3620ff3aa", 00:11:24.395 "is_configured": true, 00:11:24.395 "data_offset": 2048, 00:11:24.395 "data_size": 63488 00:11:24.395 } 00:11:24.395 ] 00:11:24.395 }' 00:11:24.395 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.395 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.655 
10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.655 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.655 [2024-11-19 10:22:38.425067] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.915 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.915 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.915 "name": "Existed_Raid", 00:11:24.915 "aliases": [ 00:11:24.915 "1299d02f-97ae-43c5-abe9-08dd982e60b8" 00:11:24.915 ], 00:11:24.915 "product_name": "Raid Volume", 00:11:24.915 "block_size": 512, 00:11:24.915 "num_blocks": 63488, 00:11:24.915 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:24.915 "assigned_rate_limits": { 00:11:24.915 "rw_ios_per_sec": 0, 00:11:24.915 "rw_mbytes_per_sec": 0, 00:11:24.915 "r_mbytes_per_sec": 0, 00:11:24.915 "w_mbytes_per_sec": 0 00:11:24.915 }, 00:11:24.915 "claimed": false, 00:11:24.915 "zoned": false, 00:11:24.915 "supported_io_types": { 00:11:24.915 "read": true, 00:11:24.915 "write": true, 00:11:24.915 "unmap": false, 00:11:24.915 "flush": false, 00:11:24.915 "reset": true, 00:11:24.915 "nvme_admin": false, 00:11:24.915 "nvme_io": false, 00:11:24.915 "nvme_io_md": false, 00:11:24.915 "write_zeroes": true, 00:11:24.915 "zcopy": false, 00:11:24.915 "get_zone_info": false, 00:11:24.915 "zone_management": false, 00:11:24.915 "zone_append": false, 00:11:24.915 "compare": false, 00:11:24.915 "compare_and_write": false, 00:11:24.916 "abort": false, 00:11:24.916 "seek_hole": false, 00:11:24.916 "seek_data": false, 00:11:24.916 "copy": false, 00:11:24.916 
"nvme_iov_md": false 00:11:24.916 }, 00:11:24.916 "memory_domains": [ 00:11:24.916 { 00:11:24.916 "dma_device_id": "system", 00:11:24.916 "dma_device_type": 1 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.916 "dma_device_type": 2 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "system", 00:11:24.916 "dma_device_type": 1 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.916 "dma_device_type": 2 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "system", 00:11:24.916 "dma_device_type": 1 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.916 "dma_device_type": 2 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "system", 00:11:24.916 "dma_device_type": 1 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.916 "dma_device_type": 2 00:11:24.916 } 00:11:24.916 ], 00:11:24.916 "driver_specific": { 00:11:24.916 "raid": { 00:11:24.916 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:24.916 "strip_size_kb": 0, 00:11:24.916 "state": "online", 00:11:24.916 "raid_level": "raid1", 00:11:24.916 "superblock": true, 00:11:24.916 "num_base_bdevs": 4, 00:11:24.916 "num_base_bdevs_discovered": 4, 00:11:24.916 "num_base_bdevs_operational": 4, 00:11:24.916 "base_bdevs_list": [ 00:11:24.916 { 00:11:24.916 "name": "BaseBdev1", 00:11:24.916 "uuid": "5fa014d3-e935-486a-aa41-916976f10d14", 00:11:24.916 "is_configured": true, 00:11:24.916 "data_offset": 2048, 00:11:24.916 "data_size": 63488 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "name": "BaseBdev2", 00:11:24.916 "uuid": "10187101-d153-4d91-9373-589c66a9f39f", 00:11:24.916 "is_configured": true, 00:11:24.916 "data_offset": 2048, 00:11:24.916 "data_size": 63488 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "name": "BaseBdev3", 00:11:24.916 "uuid": "c34b4594-fb93-4dfa-89a6-b73fa0142921", 00:11:24.916 "is_configured": true, 
00:11:24.916 "data_offset": 2048, 00:11:24.916 "data_size": 63488 00:11:24.916 }, 00:11:24.916 { 00:11:24.916 "name": "BaseBdev4", 00:11:24.916 "uuid": "2f92ce53-35be-4adc-96bd-a0a3620ff3aa", 00:11:24.916 "is_configured": true, 00:11:24.916 "data_offset": 2048, 00:11:24.916 "data_size": 63488 00:11:24.916 } 00:11:24.916 ] 00:11:24.916 } 00:11:24.916 } 00:11:24.916 }' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.916 BaseBdev2 00:11:24.916 BaseBdev3 00:11:24.916 BaseBdev4' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.916 10:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.916 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 [2024-11-19 10:22:38.732287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:25.177 10:22:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.177 "name": "Existed_Raid", 00:11:25.177 "uuid": "1299d02f-97ae-43c5-abe9-08dd982e60b8", 00:11:25.177 "strip_size_kb": 0, 00:11:25.177 
"state": "online", 00:11:25.177 "raid_level": "raid1", 00:11:25.177 "superblock": true, 00:11:25.177 "num_base_bdevs": 4, 00:11:25.177 "num_base_bdevs_discovered": 3, 00:11:25.177 "num_base_bdevs_operational": 3, 00:11:25.177 "base_bdevs_list": [ 00:11:25.177 { 00:11:25.177 "name": null, 00:11:25.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.177 "is_configured": false, 00:11:25.177 "data_offset": 0, 00:11:25.177 "data_size": 63488 00:11:25.177 }, 00:11:25.177 { 00:11:25.177 "name": "BaseBdev2", 00:11:25.177 "uuid": "10187101-d153-4d91-9373-589c66a9f39f", 00:11:25.177 "is_configured": true, 00:11:25.177 "data_offset": 2048, 00:11:25.177 "data_size": 63488 00:11:25.177 }, 00:11:25.177 { 00:11:25.177 "name": "BaseBdev3", 00:11:25.177 "uuid": "c34b4594-fb93-4dfa-89a6-b73fa0142921", 00:11:25.177 "is_configured": true, 00:11:25.177 "data_offset": 2048, 00:11:25.177 "data_size": 63488 00:11:25.177 }, 00:11:25.177 { 00:11:25.177 "name": "BaseBdev4", 00:11:25.177 "uuid": "2f92ce53-35be-4adc-96bd-a0a3620ff3aa", 00:11:25.177 "is_configured": true, 00:11:25.177 "data_offset": 2048, 00:11:25.177 "data_size": 63488 00:11:25.177 } 00:11:25.177 ] 00:11:25.177 }' 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.177 10:22:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.746 10:22:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 [2024-11-19 10:22:39.335676] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.746 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.746 [2024-11-19 10:22:39.494319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.006 [2024-11-19 10:22:39.648844] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:26.006 [2024-11-19 10:22:39.649018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.006 [2024-11-19 10:22:39.746260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.006 [2024-11-19 10:22:39.746403] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.006 [2024-11-19 10:22:39.746448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.006 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.007 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.007 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:26.007 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.007 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.007 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.007 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 BaseBdev2 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:26.267 [ 00:11:26.267 { 00:11:26.267 "name": "BaseBdev2", 00:11:26.267 "aliases": [ 00:11:26.267 "c78c3ab3-d7cc-41bc-b56a-de8d93c33285" 00:11:26.267 ], 00:11:26.267 "product_name": "Malloc disk", 00:11:26.267 "block_size": 512, 00:11:26.267 "num_blocks": 65536, 00:11:26.267 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:26.267 "assigned_rate_limits": { 00:11:26.267 "rw_ios_per_sec": 0, 00:11:26.267 "rw_mbytes_per_sec": 0, 00:11:26.267 "r_mbytes_per_sec": 0, 00:11:26.267 "w_mbytes_per_sec": 0 00:11:26.267 }, 00:11:26.267 "claimed": false, 00:11:26.267 "zoned": false, 00:11:26.267 "supported_io_types": { 00:11:26.267 "read": true, 00:11:26.267 "write": true, 00:11:26.267 "unmap": true, 00:11:26.267 "flush": true, 00:11:26.267 "reset": true, 00:11:26.267 "nvme_admin": false, 00:11:26.267 "nvme_io": false, 00:11:26.267 "nvme_io_md": false, 00:11:26.267 "write_zeroes": true, 00:11:26.267 "zcopy": true, 00:11:26.267 "get_zone_info": false, 00:11:26.267 "zone_management": false, 00:11:26.267 "zone_append": false, 00:11:26.267 "compare": false, 00:11:26.267 "compare_and_write": false, 00:11:26.267 "abort": true, 00:11:26.267 "seek_hole": false, 00:11:26.267 "seek_data": false, 00:11:26.267 "copy": true, 00:11:26.267 "nvme_iov_md": false 00:11:26.267 }, 00:11:26.267 "memory_domains": [ 00:11:26.267 { 00:11:26.267 "dma_device_id": "system", 00:11:26.267 "dma_device_type": 1 00:11:26.267 }, 00:11:26.267 { 00:11:26.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.267 "dma_device_type": 2 00:11:26.267 } 00:11:26.267 ], 00:11:26.267 "driver_specific": {} 00:11:26.267 } 00:11:26.267 ] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.267 10:22:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 BaseBdev3 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.267 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.268 10:22:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.268 [ 00:11:26.268 { 00:11:26.268 "name": "BaseBdev3", 00:11:26.268 "aliases": [ 00:11:26.268 "682e16a1-5777-44c0-9807-b95e8243de4f" 00:11:26.268 ], 00:11:26.268 "product_name": "Malloc disk", 00:11:26.268 "block_size": 512, 00:11:26.268 "num_blocks": 65536, 00:11:26.268 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:26.268 "assigned_rate_limits": { 00:11:26.268 "rw_ios_per_sec": 0, 00:11:26.268 "rw_mbytes_per_sec": 0, 00:11:26.268 "r_mbytes_per_sec": 0, 00:11:26.268 "w_mbytes_per_sec": 0 00:11:26.268 }, 00:11:26.268 "claimed": false, 00:11:26.268 "zoned": false, 00:11:26.268 "supported_io_types": { 00:11:26.268 "read": true, 00:11:26.268 "write": true, 00:11:26.268 "unmap": true, 00:11:26.268 "flush": true, 00:11:26.268 "reset": true, 00:11:26.268 "nvme_admin": false, 00:11:26.268 "nvme_io": false, 00:11:26.268 "nvme_io_md": false, 00:11:26.268 "write_zeroes": true, 00:11:26.268 "zcopy": true, 00:11:26.268 "get_zone_info": false, 00:11:26.268 "zone_management": false, 00:11:26.268 "zone_append": false, 00:11:26.268 "compare": false, 00:11:26.268 "compare_and_write": false, 00:11:26.268 "abort": true, 00:11:26.268 "seek_hole": false, 00:11:26.268 "seek_data": false, 00:11:26.268 "copy": true, 00:11:26.268 "nvme_iov_md": false 00:11:26.268 }, 00:11:26.268 "memory_domains": [ 00:11:26.268 { 00:11:26.268 "dma_device_id": "system", 00:11:26.268 "dma_device_type": 1 00:11:26.268 }, 00:11:26.268 { 00:11:26.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.268 "dma_device_type": 2 00:11:26.268 } 00:11:26.268 ], 00:11:26.268 "driver_specific": {} 00:11:26.268 } 00:11:26.268 ] 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.268 10:22:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.268 BaseBdev4 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.268 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.268 [ 00:11:26.268 { 00:11:26.268 "name": "BaseBdev4", 00:11:26.268 "aliases": [ 00:11:26.268 "5be8a6ed-d437-4c50-9f06-41187165135e" 00:11:26.268 ], 00:11:26.268 "product_name": "Malloc disk", 00:11:26.268 "block_size": 512, 00:11:26.268 "num_blocks": 65536, 00:11:26.268 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:26.268 "assigned_rate_limits": { 00:11:26.268 "rw_ios_per_sec": 0, 00:11:26.268 "rw_mbytes_per_sec": 0, 00:11:26.268 "r_mbytes_per_sec": 0, 00:11:26.268 "w_mbytes_per_sec": 0 00:11:26.268 }, 00:11:26.268 "claimed": false, 00:11:26.268 "zoned": false, 00:11:26.268 "supported_io_types": { 00:11:26.268 "read": true, 00:11:26.268 "write": true, 00:11:26.268 "unmap": true, 00:11:26.268 "flush": true, 00:11:26.268 "reset": true, 00:11:26.268 "nvme_admin": false, 00:11:26.268 "nvme_io": false, 00:11:26.268 "nvme_io_md": false, 00:11:26.268 "write_zeroes": true, 00:11:26.268 "zcopy": true, 00:11:26.268 "get_zone_info": false, 00:11:26.268 "zone_management": false, 00:11:26.268 "zone_append": false, 00:11:26.268 "compare": false, 00:11:26.268 "compare_and_write": false, 00:11:26.268 "abort": true, 00:11:26.268 "seek_hole": false, 00:11:26.268 "seek_data": false, 00:11:26.268 "copy": true, 00:11:26.268 "nvme_iov_md": false 00:11:26.268 }, 00:11:26.268 "memory_domains": [ 00:11:26.268 { 00:11:26.268 "dma_device_id": "system", 00:11:26.268 "dma_device_type": 1 00:11:26.268 }, 00:11:26.268 { 00:11:26.268 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.268 "dma_device_type": 2 00:11:26.268 } 00:11:26.268 ], 00:11:26.268 "driver_specific": {} 00:11:26.268 } 00:11:26.268 ] 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.528 [2024-11-19 10:22:40.053363] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.528 [2024-11-19 10:22:40.053464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.528 [2024-11-19 10:22:40.053506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.528 [2024-11-19 10:22:40.055351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.528 [2024-11-19 10:22:40.055443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.528 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.529 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.529 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.529 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.529 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.529 "name": "Existed_Raid", 00:11:26.529 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:26.529 "strip_size_kb": 0, 00:11:26.529 "state": "configuring", 00:11:26.529 "raid_level": "raid1", 00:11:26.529 "superblock": true, 00:11:26.529 "num_base_bdevs": 4, 00:11:26.529 "num_base_bdevs_discovered": 3, 00:11:26.529 "num_base_bdevs_operational": 4, 00:11:26.529 "base_bdevs_list": [ 00:11:26.529 { 00:11:26.529 "name": "BaseBdev1", 00:11:26.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.529 "is_configured": false, 00:11:26.529 "data_offset": 0, 00:11:26.529 "data_size": 0 00:11:26.529 }, 00:11:26.529 { 00:11:26.529 "name": "BaseBdev2", 00:11:26.529 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 
00:11:26.529 "is_configured": true, 00:11:26.529 "data_offset": 2048, 00:11:26.529 "data_size": 63488 00:11:26.529 }, 00:11:26.529 { 00:11:26.529 "name": "BaseBdev3", 00:11:26.529 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:26.529 "is_configured": true, 00:11:26.529 "data_offset": 2048, 00:11:26.529 "data_size": 63488 00:11:26.529 }, 00:11:26.529 { 00:11:26.529 "name": "BaseBdev4", 00:11:26.529 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:26.529 "is_configured": true, 00:11:26.529 "data_offset": 2048, 00:11:26.529 "data_size": 63488 00:11:26.529 } 00:11:26.529 ] 00:11:26.529 }' 00:11:26.529 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.529 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.789 [2024-11-19 10:22:40.480669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.789 "name": "Existed_Raid", 00:11:26.789 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:26.789 "strip_size_kb": 0, 00:11:26.789 "state": "configuring", 00:11:26.789 "raid_level": "raid1", 00:11:26.789 "superblock": true, 00:11:26.789 "num_base_bdevs": 4, 00:11:26.789 "num_base_bdevs_discovered": 2, 00:11:26.789 "num_base_bdevs_operational": 4, 00:11:26.789 "base_bdevs_list": [ 00:11:26.789 { 00:11:26.789 "name": "BaseBdev1", 00:11:26.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.789 "is_configured": false, 00:11:26.789 "data_offset": 0, 00:11:26.789 "data_size": 0 00:11:26.789 }, 00:11:26.789 { 00:11:26.789 "name": null, 00:11:26.789 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:26.789 
"is_configured": false, 00:11:26.789 "data_offset": 0, 00:11:26.789 "data_size": 63488 00:11:26.789 }, 00:11:26.789 { 00:11:26.789 "name": "BaseBdev3", 00:11:26.789 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:26.789 "is_configured": true, 00:11:26.789 "data_offset": 2048, 00:11:26.789 "data_size": 63488 00:11:26.789 }, 00:11:26.789 { 00:11:26.789 "name": "BaseBdev4", 00:11:26.789 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:26.789 "is_configured": true, 00:11:26.789 "data_offset": 2048, 00:11:26.789 "data_size": 63488 00:11:26.789 } 00:11:26.789 ] 00:11:26.789 }' 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.789 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.359 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.359 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.359 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.359 10:22:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.359 10:22:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.359 [2024-11-19 10:22:41.053290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.359 BaseBdev1 
00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.359 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.359 [ 00:11:27.359 { 00:11:27.359 "name": "BaseBdev1", 00:11:27.359 "aliases": [ 00:11:27.359 "d52a314f-2ea8-471a-b9f9-2e080cc1531f" 00:11:27.359 ], 00:11:27.360 "product_name": "Malloc disk", 00:11:27.360 "block_size": 512, 00:11:27.360 "num_blocks": 65536, 00:11:27.360 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:27.360 "assigned_rate_limits": { 00:11:27.360 
"rw_ios_per_sec": 0, 00:11:27.360 "rw_mbytes_per_sec": 0, 00:11:27.360 "r_mbytes_per_sec": 0, 00:11:27.360 "w_mbytes_per_sec": 0 00:11:27.360 }, 00:11:27.360 "claimed": true, 00:11:27.360 "claim_type": "exclusive_write", 00:11:27.360 "zoned": false, 00:11:27.360 "supported_io_types": { 00:11:27.360 "read": true, 00:11:27.360 "write": true, 00:11:27.360 "unmap": true, 00:11:27.360 "flush": true, 00:11:27.360 "reset": true, 00:11:27.360 "nvme_admin": false, 00:11:27.360 "nvme_io": false, 00:11:27.360 "nvme_io_md": false, 00:11:27.360 "write_zeroes": true, 00:11:27.360 "zcopy": true, 00:11:27.360 "get_zone_info": false, 00:11:27.360 "zone_management": false, 00:11:27.360 "zone_append": false, 00:11:27.360 "compare": false, 00:11:27.360 "compare_and_write": false, 00:11:27.360 "abort": true, 00:11:27.360 "seek_hole": false, 00:11:27.360 "seek_data": false, 00:11:27.360 "copy": true, 00:11:27.360 "nvme_iov_md": false 00:11:27.360 }, 00:11:27.360 "memory_domains": [ 00:11:27.360 { 00:11:27.360 "dma_device_id": "system", 00:11:27.360 "dma_device_type": 1 00:11:27.360 }, 00:11:27.360 { 00:11:27.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.360 "dma_device_type": 2 00:11:27.360 } 00:11:27.360 ], 00:11:27.360 "driver_specific": {} 00:11:27.360 } 00:11:27.360 ] 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.360 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.620 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.620 "name": "Existed_Raid", 00:11:27.620 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:27.620 "strip_size_kb": 0, 00:11:27.620 "state": "configuring", 00:11:27.620 "raid_level": "raid1", 00:11:27.620 "superblock": true, 00:11:27.620 "num_base_bdevs": 4, 00:11:27.620 "num_base_bdevs_discovered": 3, 00:11:27.620 "num_base_bdevs_operational": 4, 00:11:27.620 "base_bdevs_list": [ 00:11:27.620 { 00:11:27.620 "name": "BaseBdev1", 00:11:27.620 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:27.620 "is_configured": true, 00:11:27.620 "data_offset": 2048, 00:11:27.620 "data_size": 63488 
00:11:27.620 }, 00:11:27.620 { 00:11:27.620 "name": null, 00:11:27.620 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:27.620 "is_configured": false, 00:11:27.620 "data_offset": 0, 00:11:27.620 "data_size": 63488 00:11:27.620 }, 00:11:27.620 { 00:11:27.620 "name": "BaseBdev3", 00:11:27.620 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:27.620 "is_configured": true, 00:11:27.620 "data_offset": 2048, 00:11:27.620 "data_size": 63488 00:11:27.620 }, 00:11:27.620 { 00:11:27.620 "name": "BaseBdev4", 00:11:27.620 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:27.620 "is_configured": true, 00:11:27.620 "data_offset": 2048, 00:11:27.620 "data_size": 63488 00:11:27.620 } 00:11:27.620 ] 00:11:27.620 }' 00:11:27.620 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.620 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.880 
[2024-11-19 10:22:41.596472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.880 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.880 10:22:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.880 "name": "Existed_Raid", 00:11:27.880 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:27.880 "strip_size_kb": 0, 00:11:27.880 "state": "configuring", 00:11:27.880 "raid_level": "raid1", 00:11:27.880 "superblock": true, 00:11:27.880 "num_base_bdevs": 4, 00:11:27.880 "num_base_bdevs_discovered": 2, 00:11:27.880 "num_base_bdevs_operational": 4, 00:11:27.880 "base_bdevs_list": [ 00:11:27.881 { 00:11:27.881 "name": "BaseBdev1", 00:11:27.881 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:27.881 "is_configured": true, 00:11:27.881 "data_offset": 2048, 00:11:27.881 "data_size": 63488 00:11:27.881 }, 00:11:27.881 { 00:11:27.881 "name": null, 00:11:27.881 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:27.881 "is_configured": false, 00:11:27.881 "data_offset": 0, 00:11:27.881 "data_size": 63488 00:11:27.881 }, 00:11:27.881 { 00:11:27.881 "name": null, 00:11:27.881 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:27.881 "is_configured": false, 00:11:27.881 "data_offset": 0, 00:11:27.881 "data_size": 63488 00:11:27.881 }, 00:11:27.881 { 00:11:27.881 "name": "BaseBdev4", 00:11:27.881 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:27.881 "is_configured": true, 00:11:27.881 "data_offset": 2048, 00:11:27.881 "data_size": 63488 00:11:27.881 } 00:11:27.881 ] 00:11:27.881 }' 00:11:27.881 10:22:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.881 10:22:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.451 
10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 [2024-11-19 10:22:42.055681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.451 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.451 "name": "Existed_Raid", 00:11:28.451 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:28.451 "strip_size_kb": 0, 00:11:28.451 "state": "configuring", 00:11:28.451 "raid_level": "raid1", 00:11:28.451 "superblock": true, 00:11:28.451 "num_base_bdevs": 4, 00:11:28.451 "num_base_bdevs_discovered": 3, 00:11:28.451 "num_base_bdevs_operational": 4, 00:11:28.451 "base_bdevs_list": [ 00:11:28.451 { 00:11:28.451 "name": "BaseBdev1", 00:11:28.451 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:28.451 "is_configured": true, 00:11:28.451 "data_offset": 2048, 00:11:28.451 "data_size": 63488 00:11:28.451 }, 00:11:28.451 { 00:11:28.451 "name": null, 00:11:28.451 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:28.451 "is_configured": false, 00:11:28.451 "data_offset": 0, 00:11:28.451 "data_size": 63488 00:11:28.451 }, 00:11:28.451 { 00:11:28.452 "name": "BaseBdev3", 00:11:28.452 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:28.452 "is_configured": true, 00:11:28.452 "data_offset": 2048, 00:11:28.452 "data_size": 63488 00:11:28.452 }, 00:11:28.452 { 00:11:28.452 "name": "BaseBdev4", 00:11:28.452 "uuid": 
"5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:28.452 "is_configured": true, 00:11:28.452 "data_offset": 2048, 00:11:28.452 "data_size": 63488 00:11:28.452 } 00:11:28.452 ] 00:11:28.452 }' 00:11:28.452 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.452 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.022 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.022 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.022 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.023 [2024-11-19 10:22:42.579354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.023 "name": "Existed_Raid", 00:11:29.023 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:29.023 "strip_size_kb": 0, 00:11:29.023 "state": "configuring", 00:11:29.023 "raid_level": "raid1", 00:11:29.023 "superblock": true, 00:11:29.023 "num_base_bdevs": 4, 00:11:29.023 "num_base_bdevs_discovered": 2, 00:11:29.023 "num_base_bdevs_operational": 4, 00:11:29.023 "base_bdevs_list": [ 00:11:29.023 { 00:11:29.023 "name": null, 00:11:29.023 
"uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:29.023 "is_configured": false, 00:11:29.023 "data_offset": 0, 00:11:29.023 "data_size": 63488 00:11:29.023 }, 00:11:29.023 { 00:11:29.023 "name": null, 00:11:29.023 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:29.023 "is_configured": false, 00:11:29.023 "data_offset": 0, 00:11:29.023 "data_size": 63488 00:11:29.023 }, 00:11:29.023 { 00:11:29.023 "name": "BaseBdev3", 00:11:29.023 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:29.023 "is_configured": true, 00:11:29.023 "data_offset": 2048, 00:11:29.023 "data_size": 63488 00:11:29.023 }, 00:11:29.023 { 00:11:29.023 "name": "BaseBdev4", 00:11:29.023 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:29.023 "is_configured": true, 00:11:29.023 "data_offset": 2048, 00:11:29.023 "data_size": 63488 00:11:29.023 } 00:11:29.023 ] 00:11:29.023 }' 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.023 10:22:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.593 [2024-11-19 10:22:43.179371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.593 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.594 "name": "Existed_Raid", 00:11:29.594 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:29.594 "strip_size_kb": 0, 00:11:29.594 "state": "configuring", 00:11:29.594 "raid_level": "raid1", 00:11:29.594 "superblock": true, 00:11:29.594 "num_base_bdevs": 4, 00:11:29.594 "num_base_bdevs_discovered": 3, 00:11:29.594 "num_base_bdevs_operational": 4, 00:11:29.594 "base_bdevs_list": [ 00:11:29.594 { 00:11:29.594 "name": null, 00:11:29.594 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:29.594 "is_configured": false, 00:11:29.594 "data_offset": 0, 00:11:29.594 "data_size": 63488 00:11:29.594 }, 00:11:29.594 { 00:11:29.594 "name": "BaseBdev2", 00:11:29.594 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:29.594 "is_configured": true, 00:11:29.594 "data_offset": 2048, 00:11:29.594 "data_size": 63488 00:11:29.594 }, 00:11:29.594 { 00:11:29.594 "name": "BaseBdev3", 00:11:29.594 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:29.594 "is_configured": true, 00:11:29.594 "data_offset": 2048, 00:11:29.594 "data_size": 63488 00:11:29.594 }, 00:11:29.594 { 00:11:29.594 "name": "BaseBdev4", 00:11:29.594 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:29.594 "is_configured": true, 00:11:29.594 "data_offset": 2048, 00:11:29.594 "data_size": 63488 00:11:29.594 } 00:11:29.594 ] 00:11:29.594 }' 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.594 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.853 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:29.853 10:22:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.853 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.853 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.853 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d52a314f-2ea8-471a-b9f9-2e080cc1531f 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.114 [2024-11-19 10:22:43.733208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.114 [2024-11-19 10:22:43.733558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.114 [2024-11-19 10:22:43.733618] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:30.114 [2024-11-19 10:22:43.733932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:30.114 NewBaseBdev 00:11:30.114 [2024-11-19 10:22:43.734169] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.114 [2024-11-19 10:22:43.734183] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.114 [2024-11-19 10:22:43.734336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.114 10:22:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.114 [ 00:11:30.114 { 00:11:30.114 "name": "NewBaseBdev", 00:11:30.114 "aliases": [ 00:11:30.114 "d52a314f-2ea8-471a-b9f9-2e080cc1531f" 00:11:30.114 ], 00:11:30.114 "product_name": "Malloc disk", 00:11:30.114 "block_size": 512, 00:11:30.114 "num_blocks": 65536, 00:11:30.114 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:30.114 "assigned_rate_limits": { 00:11:30.114 "rw_ios_per_sec": 0, 00:11:30.114 "rw_mbytes_per_sec": 0, 00:11:30.114 "r_mbytes_per_sec": 0, 00:11:30.114 "w_mbytes_per_sec": 0 00:11:30.114 }, 00:11:30.114 "claimed": true, 00:11:30.114 "claim_type": "exclusive_write", 00:11:30.114 "zoned": false, 00:11:30.114 "supported_io_types": { 00:11:30.114 "read": true, 00:11:30.114 "write": true, 00:11:30.114 "unmap": true, 00:11:30.114 "flush": true, 00:11:30.114 "reset": true, 00:11:30.114 "nvme_admin": false, 00:11:30.114 "nvme_io": false, 00:11:30.114 "nvme_io_md": false, 00:11:30.114 "write_zeroes": true, 00:11:30.114 "zcopy": true, 00:11:30.114 "get_zone_info": false, 00:11:30.114 "zone_management": false, 00:11:30.114 "zone_append": false, 00:11:30.114 "compare": false, 00:11:30.114 "compare_and_write": false, 00:11:30.114 "abort": true, 00:11:30.114 "seek_hole": false, 00:11:30.114 "seek_data": false, 00:11:30.114 "copy": true, 00:11:30.114 "nvme_iov_md": false 00:11:30.114 }, 00:11:30.114 "memory_domains": [ 00:11:30.114 { 00:11:30.114 "dma_device_id": "system", 00:11:30.114 "dma_device_type": 1 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.114 "dma_device_type": 2 00:11:30.114 } 00:11:30.114 ], 00:11:30.114 "driver_specific": {} 00:11:30.114 } 00:11:30.114 ] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.114 10:22:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.114 "name": "Existed_Raid", 00:11:30.114 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:30.114 "strip_size_kb": 0, 00:11:30.114 
"state": "online", 00:11:30.114 "raid_level": "raid1", 00:11:30.114 "superblock": true, 00:11:30.114 "num_base_bdevs": 4, 00:11:30.114 "num_base_bdevs_discovered": 4, 00:11:30.114 "num_base_bdevs_operational": 4, 00:11:30.114 "base_bdevs_list": [ 00:11:30.114 { 00:11:30.114 "name": "NewBaseBdev", 00:11:30.114 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 2048, 00:11:30.114 "data_size": 63488 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "name": "BaseBdev2", 00:11:30.114 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 2048, 00:11:30.114 "data_size": 63488 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "name": "BaseBdev3", 00:11:30.114 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 2048, 00:11:30.114 "data_size": 63488 00:11:30.114 }, 00:11:30.114 { 00:11:30.114 "name": "BaseBdev4", 00:11:30.114 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:30.114 "is_configured": true, 00:11:30.114 "data_offset": 2048, 00:11:30.114 "data_size": 63488 00:11:30.114 } 00:11:30.114 ] 00:11:30.114 }' 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.114 10:22:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.374 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.374 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.374 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.374 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.375 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.375 
10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.375 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.375 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.375 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.375 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.375 [2024-11-19 10:22:44.148878] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.634 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.634 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.634 "name": "Existed_Raid", 00:11:30.634 "aliases": [ 00:11:30.634 "bba308de-e3b3-4909-8960-8938f87af2b1" 00:11:30.634 ], 00:11:30.634 "product_name": "Raid Volume", 00:11:30.634 "block_size": 512, 00:11:30.634 "num_blocks": 63488, 00:11:30.634 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:30.634 "assigned_rate_limits": { 00:11:30.634 "rw_ios_per_sec": 0, 00:11:30.634 "rw_mbytes_per_sec": 0, 00:11:30.634 "r_mbytes_per_sec": 0, 00:11:30.634 "w_mbytes_per_sec": 0 00:11:30.634 }, 00:11:30.634 "claimed": false, 00:11:30.634 "zoned": false, 00:11:30.634 "supported_io_types": { 00:11:30.634 "read": true, 00:11:30.634 "write": true, 00:11:30.634 "unmap": false, 00:11:30.634 "flush": false, 00:11:30.634 "reset": true, 00:11:30.634 "nvme_admin": false, 00:11:30.634 "nvme_io": false, 00:11:30.634 "nvme_io_md": false, 00:11:30.634 "write_zeroes": true, 00:11:30.634 "zcopy": false, 00:11:30.635 "get_zone_info": false, 00:11:30.635 "zone_management": false, 00:11:30.635 "zone_append": false, 00:11:30.635 "compare": false, 00:11:30.635 "compare_and_write": false, 00:11:30.635 
"abort": false, 00:11:30.635 "seek_hole": false, 00:11:30.635 "seek_data": false, 00:11:30.635 "copy": false, 00:11:30.635 "nvme_iov_md": false 00:11:30.635 }, 00:11:30.635 "memory_domains": [ 00:11:30.635 { 00:11:30.635 "dma_device_id": "system", 00:11:30.635 "dma_device_type": 1 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.635 "dma_device_type": 2 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "system", 00:11:30.635 "dma_device_type": 1 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.635 "dma_device_type": 2 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "system", 00:11:30.635 "dma_device_type": 1 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.635 "dma_device_type": 2 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "system", 00:11:30.635 "dma_device_type": 1 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.635 "dma_device_type": 2 00:11:30.635 } 00:11:30.635 ], 00:11:30.635 "driver_specific": { 00:11:30.635 "raid": { 00:11:30.635 "uuid": "bba308de-e3b3-4909-8960-8938f87af2b1", 00:11:30.635 "strip_size_kb": 0, 00:11:30.635 "state": "online", 00:11:30.635 "raid_level": "raid1", 00:11:30.635 "superblock": true, 00:11:30.635 "num_base_bdevs": 4, 00:11:30.635 "num_base_bdevs_discovered": 4, 00:11:30.635 "num_base_bdevs_operational": 4, 00:11:30.635 "base_bdevs_list": [ 00:11:30.635 { 00:11:30.635 "name": "NewBaseBdev", 00:11:30.635 "uuid": "d52a314f-2ea8-471a-b9f9-2e080cc1531f", 00:11:30.635 "is_configured": true, 00:11:30.635 "data_offset": 2048, 00:11:30.635 "data_size": 63488 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "name": "BaseBdev2", 00:11:30.635 "uuid": "c78c3ab3-d7cc-41bc-b56a-de8d93c33285", 00:11:30.635 "is_configured": true, 00:11:30.635 "data_offset": 2048, 00:11:30.635 "data_size": 63488 00:11:30.635 }, 00:11:30.635 { 
00:11:30.635 "name": "BaseBdev3", 00:11:30.635 "uuid": "682e16a1-5777-44c0-9807-b95e8243de4f", 00:11:30.635 "is_configured": true, 00:11:30.635 "data_offset": 2048, 00:11:30.635 "data_size": 63488 00:11:30.635 }, 00:11:30.635 { 00:11:30.635 "name": "BaseBdev4", 00:11:30.635 "uuid": "5be8a6ed-d437-4c50-9f06-41187165135e", 00:11:30.635 "is_configured": true, 00:11:30.635 "data_offset": 2048, 00:11:30.635 "data_size": 63488 00:11:30.635 } 00:11:30.635 ] 00:11:30.635 } 00:11:30.635 } 00:11:30.635 }' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.635 BaseBdev2 00:11:30.635 BaseBdev3 00:11:30.635 BaseBdev4' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.635 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.895 [2024-11-19 10:22:44.491979] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.895 [2024-11-19 10:22:44.492024] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.895 [2024-11-19 10:22:44.492115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.895 [2024-11-19 10:22:44.492441] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.895 [2024-11-19 10:22:44.492456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73608 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73608 ']' 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73608 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73608 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.895 killing process with pid 73608 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73608' 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73608 00:11:30.895 10:22:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73608 00:11:30.895 [2024-11-19 10:22:44.526036] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:31.466 [2024-11-19 10:22:44.940806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.405 10:22:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.405 00:11:32.405 real 0m11.460s 00:11:32.405 user 0m18.170s 00:11:32.405 sys 0m1.988s 00:11:32.405 10:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:32.405 10:22:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.405 ************************************ 00:11:32.405 END TEST raid_state_function_test_sb 00:11:32.405 ************************************ 00:11:32.665 10:22:46 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:32.665 10:22:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.665 10:22:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.665 10:22:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.665 ************************************ 00:11:32.665 START TEST raid_superblock_test 00:11:32.665 ************************************ 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:32.665 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:32.666 10:22:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74278 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:32.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74278 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74278 ']' 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.666 10:22:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.666 [2024-11-19 10:22:46.290383] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:32.666 [2024-11-19 10:22:46.290594] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74278 ] 00:11:32.925 [2024-11-19 10:22:46.448906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.925 [2024-11-19 10:22:46.568153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.185 [2024-11-19 10:22:46.780799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.185 [2024-11-19 10:22:46.780954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:33.445 
10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.445 malloc1 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.445 [2024-11-19 10:22:47.211945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.445 [2024-11-19 10:22:47.212028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.445 [2024-11-19 10:22:47.212071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:33.445 [2024-11-19 10:22:47.212081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.445 [2024-11-19 10:22:47.214310] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.445 [2024-11-19 10:22:47.214346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.445 pt1 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.445 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 malloc2 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 [2024-11-19 10:22:47.272603] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.706 [2024-11-19 10:22:47.272720] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.706 [2024-11-19 10:22:47.272785] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.706 [2024-11-19 10:22:47.272828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.706 [2024-11-19 10:22:47.275128] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.706 [2024-11-19 10:22:47.275219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.706 
pt2 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 malloc3 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.706 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.706 [2024-11-19 10:22:47.343744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.706 [2024-11-19 10:22:47.343860] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.706 [2024-11-19 10:22:47.343908] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:33.706 [2024-11-19 10:22:47.343951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.706 [2024-11-19 10:22:47.346283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.706 [2024-11-19 10:22:47.346369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.707 pt3 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.707 malloc4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.707 [2024-11-19 10:22:47.406401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.707 [2024-11-19 10:22:47.406500] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.707 [2024-11-19 10:22:47.406564] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.707 [2024-11-19 10:22:47.406598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.707 [2024-11-19 10:22:47.408907] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.707 [2024-11-19 10:22:47.408989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.707 pt4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.707 [2024-11-19 10:22:47.418393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.707 [2024-11-19 10:22:47.420411] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.707 [2024-11-19 10:22:47.420487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.707 [2024-11-19 10:22:47.420529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.707 [2024-11-19 10:22:47.420742] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:33.707 [2024-11-19 10:22:47.420759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:33.707 [2024-11-19 10:22:47.421032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.707 [2024-11-19 10:22:47.421209] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:33.707 [2024-11-19 10:22:47.421225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:33.707 [2024-11-19 10:22:47.421381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.707 
10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.707 "name": "raid_bdev1", 00:11:33.707 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:33.707 "strip_size_kb": 0, 00:11:33.707 "state": "online", 00:11:33.707 "raid_level": "raid1", 00:11:33.707 "superblock": true, 00:11:33.707 "num_base_bdevs": 4, 00:11:33.707 "num_base_bdevs_discovered": 4, 00:11:33.707 "num_base_bdevs_operational": 4, 00:11:33.707 "base_bdevs_list": [ 00:11:33.707 { 00:11:33.707 "name": "pt1", 00:11:33.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.707 "is_configured": true, 00:11:33.707 "data_offset": 2048, 00:11:33.707 "data_size": 63488 00:11:33.707 }, 00:11:33.707 { 00:11:33.707 "name": "pt2", 00:11:33.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.707 "is_configured": true, 00:11:33.707 "data_offset": 2048, 00:11:33.707 "data_size": 63488 00:11:33.707 }, 00:11:33.707 { 00:11:33.707 "name": "pt3", 00:11:33.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.707 "is_configured": true, 00:11:33.707 "data_offset": 2048, 00:11:33.707 "data_size": 63488 
00:11:33.707 }, 00:11:33.707 { 00:11:33.707 "name": "pt4", 00:11:33.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.707 "is_configured": true, 00:11:33.707 "data_offset": 2048, 00:11:33.707 "data_size": 63488 00:11:33.707 } 00:11:33.707 ] 00:11:33.707 }' 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.707 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.277 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.278 [2024-11-19 10:22:47.897953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.278 "name": "raid_bdev1", 00:11:34.278 "aliases": [ 00:11:34.278 "7f1a26e5-4696-42f8-81c4-151afec74055" 00:11:34.278 ], 
00:11:34.278 "product_name": "Raid Volume", 00:11:34.278 "block_size": 512, 00:11:34.278 "num_blocks": 63488, 00:11:34.278 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:34.278 "assigned_rate_limits": { 00:11:34.278 "rw_ios_per_sec": 0, 00:11:34.278 "rw_mbytes_per_sec": 0, 00:11:34.278 "r_mbytes_per_sec": 0, 00:11:34.278 "w_mbytes_per_sec": 0 00:11:34.278 }, 00:11:34.278 "claimed": false, 00:11:34.278 "zoned": false, 00:11:34.278 "supported_io_types": { 00:11:34.278 "read": true, 00:11:34.278 "write": true, 00:11:34.278 "unmap": false, 00:11:34.278 "flush": false, 00:11:34.278 "reset": true, 00:11:34.278 "nvme_admin": false, 00:11:34.278 "nvme_io": false, 00:11:34.278 "nvme_io_md": false, 00:11:34.278 "write_zeroes": true, 00:11:34.278 "zcopy": false, 00:11:34.278 "get_zone_info": false, 00:11:34.278 "zone_management": false, 00:11:34.278 "zone_append": false, 00:11:34.278 "compare": false, 00:11:34.278 "compare_and_write": false, 00:11:34.278 "abort": false, 00:11:34.278 "seek_hole": false, 00:11:34.278 "seek_data": false, 00:11:34.278 "copy": false, 00:11:34.278 "nvme_iov_md": false 00:11:34.278 }, 00:11:34.278 "memory_domains": [ 00:11:34.278 { 00:11:34.278 "dma_device_id": "system", 00:11:34.278 "dma_device_type": 1 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.278 "dma_device_type": 2 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": "system", 00:11:34.278 "dma_device_type": 1 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.278 "dma_device_type": 2 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": "system", 00:11:34.278 "dma_device_type": 1 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.278 "dma_device_type": 2 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": "system", 00:11:34.278 "dma_device_type": 1 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:34.278 "dma_device_type": 2 00:11:34.278 } 00:11:34.278 ], 00:11:34.278 "driver_specific": { 00:11:34.278 "raid": { 00:11:34.278 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:34.278 "strip_size_kb": 0, 00:11:34.278 "state": "online", 00:11:34.278 "raid_level": "raid1", 00:11:34.278 "superblock": true, 00:11:34.278 "num_base_bdevs": 4, 00:11:34.278 "num_base_bdevs_discovered": 4, 00:11:34.278 "num_base_bdevs_operational": 4, 00:11:34.278 "base_bdevs_list": [ 00:11:34.278 { 00:11:34.278 "name": "pt1", 00:11:34.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.278 "is_configured": true, 00:11:34.278 "data_offset": 2048, 00:11:34.278 "data_size": 63488 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "name": "pt2", 00:11:34.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.278 "is_configured": true, 00:11:34.278 "data_offset": 2048, 00:11:34.278 "data_size": 63488 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "name": "pt3", 00:11:34.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.278 "is_configured": true, 00:11:34.278 "data_offset": 2048, 00:11:34.278 "data_size": 63488 00:11:34.278 }, 00:11:34.278 { 00:11:34.278 "name": "pt4", 00:11:34.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.278 "is_configured": true, 00:11:34.278 "data_offset": 2048, 00:11:34.278 "data_size": 63488 00:11:34.278 } 00:11:34.278 ] 00:11:34.278 } 00:11:34.278 } 00:11:34.278 }' 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.278 pt2 00:11:34.278 pt3 00:11:34.278 pt4' 00:11:34.278 10:22:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.278 10:22:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.278 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.278 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.278 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.278 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.278 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.538 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.539 10:22:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 [2024-11-19 10:22:48.241402] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f1a26e5-4696-42f8-81c4-151afec74055 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7f1a26e5-4696-42f8-81c4-151afec74055 ']' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.539 [2024-11-19 10:22:48.288957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.539 [2024-11-19 10:22:48.289047] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.539 [2024-11-19 10:22:48.289176] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.539 [2024-11-19 10:22:48.289290] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.539 [2024-11-19 10:22:48.289345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:34.539 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.799 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.800 10:22:48 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 [2024-11-19 10:22:48.452710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:34.800 [2024-11-19 10:22:48.454794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:34.800 [2024-11-19 10:22:48.454857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:34.800 [2024-11-19 10:22:48.454893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:34.800 [2024-11-19 10:22:48.454946] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:34.800 [2024-11-19 10:22:48.455019] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:34.800 [2024-11-19 10:22:48.455041] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:34.800 [2024-11-19 10:22:48.455063] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:34.800 [2024-11-19 10:22:48.455077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.800 [2024-11-19 10:22:48.455090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:34.800 request: 00:11:34.800 { 00:11:34.800 "name": "raid_bdev1", 00:11:34.800 "raid_level": "raid1", 00:11:34.800 "base_bdevs": [ 00:11:34.800 "malloc1", 00:11:34.800 "malloc2", 00:11:34.800 "malloc3", 00:11:34.800 "malloc4" 00:11:34.800 ], 00:11:34.800 "superblock": false, 00:11:34.800 "method": "bdev_raid_create", 00:11:34.800 "req_id": 1 00:11:34.800 } 00:11:34.800 Got JSON-RPC error response 00:11:34.800 response: 00:11:34.800 { 00:11:34.800 "code": -17, 00:11:34.800 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:34.800 } 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:34.800 
10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 [2024-11-19 10:22:48.520578] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:34.800 [2024-11-19 10:22:48.520699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.800 [2024-11-19 10:22:48.520738] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.800 [2024-11-19 10:22:48.520777] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.800 [2024-11-19 10:22:48.523202] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.800 [2024-11-19 10:22:48.523287] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:34.800 [2024-11-19 10:22:48.523419] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:34.800 [2024-11-19 10:22:48.523518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:34.800 pt1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.800 10:22:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.800 "name": "raid_bdev1", 00:11:34.800 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:34.800 "strip_size_kb": 0, 00:11:34.800 "state": "configuring", 00:11:34.800 "raid_level": "raid1", 00:11:34.800 "superblock": true, 00:11:34.800 "num_base_bdevs": 4, 00:11:34.800 "num_base_bdevs_discovered": 1, 00:11:34.800 "num_base_bdevs_operational": 4, 00:11:34.800 "base_bdevs_list": [ 00:11:34.800 { 00:11:34.800 "name": "pt1", 00:11:34.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.800 "is_configured": true, 00:11:34.800 "data_offset": 2048, 00:11:34.800 "data_size": 63488 00:11:34.800 }, 00:11:34.800 { 00:11:34.800 "name": null, 00:11:34.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.800 "is_configured": false, 00:11:34.800 "data_offset": 2048, 00:11:34.800 "data_size": 63488 00:11:34.800 }, 00:11:34.800 { 00:11:34.800 "name": null, 00:11:34.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.800 
"is_configured": false, 00:11:34.800 "data_offset": 2048, 00:11:34.800 "data_size": 63488 00:11:34.800 }, 00:11:34.800 { 00:11:34.800 "name": null, 00:11:34.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.800 "is_configured": false, 00:11:34.800 "data_offset": 2048, 00:11:34.800 "data_size": 63488 00:11:34.800 } 00:11:34.800 ] 00:11:34.800 }' 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.800 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.371 [2024-11-19 10:22:48.955881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.371 [2024-11-19 10:22:48.955954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.371 [2024-11-19 10:22:48.955976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:35.371 [2024-11-19 10:22:48.955988] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.371 [2024-11-19 10:22:48.956466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.371 [2024-11-19 10:22:48.956494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.371 [2024-11-19 10:22:48.956588] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.371 [2024-11-19 10:22:48.956622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:35.371 pt2 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.371 [2024-11-19 10:22:48.967857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.371 10:22:48 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.371 10:22:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.371 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.371 "name": "raid_bdev1", 00:11:35.371 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:35.371 "strip_size_kb": 0, 00:11:35.371 "state": "configuring", 00:11:35.371 "raid_level": "raid1", 00:11:35.371 "superblock": true, 00:11:35.371 "num_base_bdevs": 4, 00:11:35.371 "num_base_bdevs_discovered": 1, 00:11:35.371 "num_base_bdevs_operational": 4, 00:11:35.371 "base_bdevs_list": [ 00:11:35.371 { 00:11:35.371 "name": "pt1", 00:11:35.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.371 "is_configured": true, 00:11:35.371 "data_offset": 2048, 00:11:35.371 "data_size": 63488 00:11:35.371 }, 00:11:35.371 { 00:11:35.371 "name": null, 00:11:35.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.371 "is_configured": false, 00:11:35.371 "data_offset": 0, 00:11:35.371 "data_size": 63488 00:11:35.371 }, 00:11:35.371 { 00:11:35.371 "name": null, 00:11:35.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.371 "is_configured": false, 00:11:35.371 "data_offset": 2048, 00:11:35.371 "data_size": 63488 00:11:35.371 }, 00:11:35.371 { 00:11:35.371 "name": null, 00:11:35.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.371 "is_configured": false, 00:11:35.371 "data_offset": 2048, 00:11:35.371 "data_size": 63488 00:11:35.371 } 00:11:35.371 ] 00:11:35.371 }' 00:11:35.371 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.371 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.939 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:35.939 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.939 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.939 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.939 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.939 [2024-11-19 10:22:49.455093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.939 [2024-11-19 10:22:49.455168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.939 [2024-11-19 10:22:49.455207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:35.939 [2024-11-19 10:22:49.455221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.939 [2024-11-19 10:22:49.455725] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.939 [2024-11-19 10:22:49.455756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.940 [2024-11-19 10:22:49.455858] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.940 [2024-11-19 10:22:49.455886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.940 pt2 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.940 10:22:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.940 [2024-11-19 10:22:49.467054] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:35.940 [2024-11-19 10:22:49.467112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.940 [2024-11-19 10:22:49.467135] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:35.940 [2024-11-19 10:22:49.467145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.940 [2024-11-19 10:22:49.467623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.940 [2024-11-19 10:22:49.467658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:35.940 [2024-11-19 10:22:49.467750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:35.940 [2024-11-19 10:22:49.467772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:35.940 pt3 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.940 [2024-11-19 10:22:49.478987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:35.940 [2024-11-19 
10:22:49.479047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.940 [2024-11-19 10:22:49.479066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:35.940 [2024-11-19 10:22:49.479075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.940 [2024-11-19 10:22:49.479524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.940 [2024-11-19 10:22:49.479559] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:35.940 [2024-11-19 10:22:49.479638] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:35.940 [2024-11-19 10:22:49.479661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:35.940 [2024-11-19 10:22:49.479831] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.940 [2024-11-19 10:22:49.479847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:35.940 [2024-11-19 10:22:49.480143] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:35.940 [2024-11-19 10:22:49.480312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.940 [2024-11-19 10:22:49.480334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:35.940 [2024-11-19 10:22:49.480492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.940 pt4 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.940 "name": "raid_bdev1", 00:11:35.940 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:35.940 "strip_size_kb": 0, 00:11:35.940 "state": "online", 00:11:35.940 "raid_level": "raid1", 00:11:35.940 "superblock": true, 00:11:35.940 "num_base_bdevs": 4, 00:11:35.940 
"num_base_bdevs_discovered": 4, 00:11:35.940 "num_base_bdevs_operational": 4, 00:11:35.940 "base_bdevs_list": [ 00:11:35.940 { 00:11:35.940 "name": "pt1", 00:11:35.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.940 "is_configured": true, 00:11:35.940 "data_offset": 2048, 00:11:35.940 "data_size": 63488 00:11:35.940 }, 00:11:35.940 { 00:11:35.940 "name": "pt2", 00:11:35.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.940 "is_configured": true, 00:11:35.940 "data_offset": 2048, 00:11:35.940 "data_size": 63488 00:11:35.940 }, 00:11:35.940 { 00:11:35.940 "name": "pt3", 00:11:35.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.940 "is_configured": true, 00:11:35.940 "data_offset": 2048, 00:11:35.940 "data_size": 63488 00:11:35.940 }, 00:11:35.940 { 00:11:35.940 "name": "pt4", 00:11:35.940 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.940 "is_configured": true, 00:11:35.940 "data_offset": 2048, 00:11:35.940 "data_size": 63488 00:11:35.940 } 00:11:35.940 ] 00:11:35.940 }' 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.940 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.246 [2024-11-19 10:22:49.946622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:36.246 "name": "raid_bdev1", 00:11:36.246 "aliases": [ 00:11:36.246 "7f1a26e5-4696-42f8-81c4-151afec74055" 00:11:36.246 ], 00:11:36.246 "product_name": "Raid Volume", 00:11:36.246 "block_size": 512, 00:11:36.246 "num_blocks": 63488, 00:11:36.246 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:36.246 "assigned_rate_limits": { 00:11:36.246 "rw_ios_per_sec": 0, 00:11:36.246 "rw_mbytes_per_sec": 0, 00:11:36.246 "r_mbytes_per_sec": 0, 00:11:36.246 "w_mbytes_per_sec": 0 00:11:36.246 }, 00:11:36.246 "claimed": false, 00:11:36.246 "zoned": false, 00:11:36.246 "supported_io_types": { 00:11:36.246 "read": true, 00:11:36.246 "write": true, 00:11:36.246 "unmap": false, 00:11:36.246 "flush": false, 00:11:36.246 "reset": true, 00:11:36.246 "nvme_admin": false, 00:11:36.246 "nvme_io": false, 00:11:36.246 "nvme_io_md": false, 00:11:36.246 "write_zeroes": true, 00:11:36.246 "zcopy": false, 00:11:36.246 "get_zone_info": false, 00:11:36.246 "zone_management": false, 00:11:36.246 "zone_append": false, 00:11:36.246 "compare": false, 00:11:36.246 "compare_and_write": false, 00:11:36.246 "abort": false, 00:11:36.246 "seek_hole": false, 00:11:36.246 "seek_data": false, 00:11:36.246 "copy": false, 00:11:36.246 "nvme_iov_md": false 00:11:36.246 }, 00:11:36.246 "memory_domains": [ 00:11:36.246 { 00:11:36.246 "dma_device_id": "system", 00:11:36.246 
"dma_device_type": 1 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.246 "dma_device_type": 2 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "system", 00:11:36.246 "dma_device_type": 1 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.246 "dma_device_type": 2 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "system", 00:11:36.246 "dma_device_type": 1 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.246 "dma_device_type": 2 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "system", 00:11:36.246 "dma_device_type": 1 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.246 "dma_device_type": 2 00:11:36.246 } 00:11:36.246 ], 00:11:36.246 "driver_specific": { 00:11:36.246 "raid": { 00:11:36.246 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:36.246 "strip_size_kb": 0, 00:11:36.246 "state": "online", 00:11:36.246 "raid_level": "raid1", 00:11:36.246 "superblock": true, 00:11:36.246 "num_base_bdevs": 4, 00:11:36.246 "num_base_bdevs_discovered": 4, 00:11:36.246 "num_base_bdevs_operational": 4, 00:11:36.246 "base_bdevs_list": [ 00:11:36.246 { 00:11:36.246 "name": "pt1", 00:11:36.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:36.246 "is_configured": true, 00:11:36.246 "data_offset": 2048, 00:11:36.246 "data_size": 63488 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "name": "pt2", 00:11:36.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.246 "is_configured": true, 00:11:36.246 "data_offset": 2048, 00:11:36.246 "data_size": 63488 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "name": "pt3", 00:11:36.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.246 "is_configured": true, 00:11:36.246 "data_offset": 2048, 00:11:36.246 "data_size": 63488 00:11:36.246 }, 00:11:36.246 { 00:11:36.246 "name": "pt4", 00:11:36.246 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:36.246 "is_configured": true, 00:11:36.246 "data_offset": 2048, 00:11:36.246 "data_size": 63488 00:11:36.246 } 00:11:36.246 ] 00:11:36.246 } 00:11:36.246 } 00:11:36.246 }' 00:11:36.246 10:22:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.506 pt2 00:11:36.506 pt3 00:11:36.506 pt4' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.506 10:22:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.506 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.507 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:36.767 [2024-11-19 10:22:50.298038] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7f1a26e5-4696-42f8-81c4-151afec74055 '!=' 7f1a26e5-4696-42f8-81c4-151afec74055 ']' 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.767 [2024-11-19 10:22:50.345637] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:36.767 10:22:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.767 "name": "raid_bdev1", 00:11:36.767 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:36.767 "strip_size_kb": 0, 00:11:36.767 "state": "online", 
00:11:36.767 "raid_level": "raid1", 00:11:36.767 "superblock": true, 00:11:36.767 "num_base_bdevs": 4, 00:11:36.767 "num_base_bdevs_discovered": 3, 00:11:36.767 "num_base_bdevs_operational": 3, 00:11:36.767 "base_bdevs_list": [ 00:11:36.767 { 00:11:36.767 "name": null, 00:11:36.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.767 "is_configured": false, 00:11:36.767 "data_offset": 0, 00:11:36.767 "data_size": 63488 00:11:36.767 }, 00:11:36.767 { 00:11:36.767 "name": "pt2", 00:11:36.767 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:36.767 "is_configured": true, 00:11:36.767 "data_offset": 2048, 00:11:36.767 "data_size": 63488 00:11:36.767 }, 00:11:36.767 { 00:11:36.767 "name": "pt3", 00:11:36.767 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:36.767 "is_configured": true, 00:11:36.767 "data_offset": 2048, 00:11:36.767 "data_size": 63488 00:11:36.767 }, 00:11:36.767 { 00:11:36.767 "name": "pt4", 00:11:36.767 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:36.767 "is_configured": true, 00:11:36.767 "data_offset": 2048, 00:11:36.767 "data_size": 63488 00:11:36.767 } 00:11:36.767 ] 00:11:36.767 }' 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.767 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.027 [2024-11-19 10:22:50.760895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:37.027 [2024-11-19 10:22:50.761013] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:37.027 [2024-11-19 10:22:50.761149] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:37.027 [2024-11-19 10:22:50.761293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:37.027 [2024-11-19 10:22:50.761355] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:37.027 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.288 
10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.288 [2024-11-19 10:22:50.860713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.288 [2024-11-19 10:22:50.860780] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.288 [2024-11-19 10:22:50.860808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:37.288 [2024-11-19 10:22:50.860821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.288 [2024-11-19 10:22:50.863429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.288 [2024-11-19 10:22:50.863524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.288 [2024-11-19 10:22:50.863654] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:37.288 [2024-11-19 10:22:50.863727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.288 pt2 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.288 "name": "raid_bdev1", 00:11:37.288 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:37.288 "strip_size_kb": 0, 00:11:37.288 "state": "configuring", 00:11:37.288 "raid_level": "raid1", 00:11:37.288 "superblock": true, 00:11:37.288 "num_base_bdevs": 4, 00:11:37.288 "num_base_bdevs_discovered": 1, 00:11:37.288 "num_base_bdevs_operational": 3, 00:11:37.288 "base_bdevs_list": [ 00:11:37.288 { 00:11:37.288 "name": null, 00:11:37.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.288 "is_configured": false, 00:11:37.288 "data_offset": 2048, 00:11:37.288 "data_size": 63488 00:11:37.288 }, 00:11:37.288 { 00:11:37.288 "name": "pt2", 00:11:37.288 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.288 "is_configured": true, 00:11:37.288 "data_offset": 2048, 00:11:37.288 "data_size": 63488 00:11:37.288 }, 00:11:37.288 { 00:11:37.288 "name": null, 00:11:37.288 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.288 "is_configured": false, 00:11:37.288 "data_offset": 2048, 00:11:37.288 "data_size": 63488 00:11:37.288 }, 00:11:37.288 { 00:11:37.288 "name": null, 00:11:37.288 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.288 "is_configured": false, 00:11:37.288 "data_offset": 2048, 00:11:37.288 "data_size": 63488 00:11:37.288 } 00:11:37.288 ] 00:11:37.288 }' 
00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.288 10:22:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.859 [2024-11-19 10:22:51.383912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.859 [2024-11-19 10:22:51.384046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.859 [2024-11-19 10:22:51.384097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:37.859 [2024-11-19 10:22:51.384160] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.859 [2024-11-19 10:22:51.384702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.859 [2024-11-19 10:22:51.384770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.859 [2024-11-19 10:22:51.384903] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:37.859 [2024-11-19 10:22:51.384960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.859 pt3 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.859 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.860 "name": "raid_bdev1", 00:11:37.860 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:37.860 "strip_size_kb": 0, 00:11:37.860 "state": "configuring", 00:11:37.860 "raid_level": "raid1", 00:11:37.860 "superblock": true, 00:11:37.860 "num_base_bdevs": 4, 00:11:37.860 "num_base_bdevs_discovered": 2, 00:11:37.860 "num_base_bdevs_operational": 3, 00:11:37.860 
"base_bdevs_list": [ 00:11:37.860 { 00:11:37.860 "name": null, 00:11:37.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.860 "is_configured": false, 00:11:37.860 "data_offset": 2048, 00:11:37.860 "data_size": 63488 00:11:37.860 }, 00:11:37.860 { 00:11:37.860 "name": "pt2", 00:11:37.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.860 "is_configured": true, 00:11:37.860 "data_offset": 2048, 00:11:37.860 "data_size": 63488 00:11:37.860 }, 00:11:37.860 { 00:11:37.860 "name": "pt3", 00:11:37.860 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.860 "is_configured": true, 00:11:37.860 "data_offset": 2048, 00:11:37.860 "data_size": 63488 00:11:37.860 }, 00:11:37.860 { 00:11:37.860 "name": null, 00:11:37.860 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:37.860 "is_configured": false, 00:11:37.860 "data_offset": 2048, 00:11:37.860 "data_size": 63488 00:11:37.860 } 00:11:37.860 ] 00:11:37.860 }' 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.860 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.120 [2024-11-19 10:22:51.819367] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:38.120 [2024-11-19 10:22:51.819498] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.120 [2024-11-19 10:22:51.819545] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:38.120 [2024-11-19 10:22:51.819583] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.120 [2024-11-19 10:22:51.820139] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.120 [2024-11-19 10:22:51.820205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:38.120 [2024-11-19 10:22:51.820335] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:38.120 [2024-11-19 10:22:51.820406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:38.120 [2024-11-19 10:22:51.820603] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:38.120 [2024-11-19 10:22:51.820647] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:38.120 [2024-11-19 10:22:51.820945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:38.120 [2024-11-19 10:22:51.821191] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:38.120 [2024-11-19 10:22:51.821251] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:38.120 [2024-11-19 10:22:51.821492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.120 pt4 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.120 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.121 "name": "raid_bdev1", 00:11:38.121 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:38.121 "strip_size_kb": 0, 00:11:38.121 "state": "online", 00:11:38.121 "raid_level": "raid1", 00:11:38.121 "superblock": true, 00:11:38.121 "num_base_bdevs": 4, 00:11:38.121 "num_base_bdevs_discovered": 3, 00:11:38.121 "num_base_bdevs_operational": 3, 00:11:38.121 "base_bdevs_list": [ 00:11:38.121 { 00:11:38.121 "name": null, 00:11:38.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.121 "is_configured": false, 00:11:38.121 
"data_offset": 2048, 00:11:38.121 "data_size": 63488 00:11:38.121 }, 00:11:38.121 { 00:11:38.121 "name": "pt2", 00:11:38.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.121 "is_configured": true, 00:11:38.121 "data_offset": 2048, 00:11:38.121 "data_size": 63488 00:11:38.121 }, 00:11:38.121 { 00:11:38.121 "name": "pt3", 00:11:38.121 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.121 "is_configured": true, 00:11:38.121 "data_offset": 2048, 00:11:38.121 "data_size": 63488 00:11:38.121 }, 00:11:38.121 { 00:11:38.121 "name": "pt4", 00:11:38.121 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.121 "is_configured": true, 00:11:38.121 "data_offset": 2048, 00:11:38.121 "data_size": 63488 00:11:38.121 } 00:11:38.121 ] 00:11:38.121 }' 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.121 10:22:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 [2024-11-19 10:22:52.306572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.691 [2024-11-19 10:22:52.306678] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.691 [2024-11-19 10:22:52.306857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.691 [2024-11-19 10:22:52.306977] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.691 [2024-11-19 10:22:52.307056] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:38.691 10:22:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 [2024-11-19 10:22:52.382445] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.691 [2024-11-19 10:22:52.382515] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:38.691 [2024-11-19 10:22:52.382536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:38.691 [2024-11-19 10:22:52.382550] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.691 [2024-11-19 10:22:52.384890] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.691 [2024-11-19 10:22:52.384936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.691 [2024-11-19 10:22:52.385039] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:38.691 [2024-11-19 10:22:52.385097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.691 [2024-11-19 10:22:52.385240] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:38.691 [2024-11-19 10:22:52.385254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.691 [2024-11-19 10:22:52.385271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:38.691 [2024-11-19 10:22:52.385348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:38.691 [2024-11-19 10:22:52.385498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:38.691 pt1 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.691 "name": "raid_bdev1", 00:11:38.691 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:38.691 "strip_size_kb": 0, 00:11:38.691 "state": "configuring", 00:11:38.691 "raid_level": "raid1", 00:11:38.691 "superblock": true, 00:11:38.691 "num_base_bdevs": 4, 00:11:38.691 "num_base_bdevs_discovered": 2, 00:11:38.691 "num_base_bdevs_operational": 3, 00:11:38.691 "base_bdevs_list": [ 00:11:38.691 { 00:11:38.691 "name": null, 00:11:38.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.691 "is_configured": false, 00:11:38.691 "data_offset": 2048, 00:11:38.691 
"data_size": 63488 00:11:38.691 }, 00:11:38.691 { 00:11:38.691 "name": "pt2", 00:11:38.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.691 "is_configured": true, 00:11:38.691 "data_offset": 2048, 00:11:38.691 "data_size": 63488 00:11:38.691 }, 00:11:38.691 { 00:11:38.691 "name": "pt3", 00:11:38.691 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.691 "is_configured": true, 00:11:38.691 "data_offset": 2048, 00:11:38.691 "data_size": 63488 00:11:38.691 }, 00:11:38.691 { 00:11:38.691 "name": null, 00:11:38.691 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:38.691 "is_configured": false, 00:11:38.691 "data_offset": 2048, 00:11:38.691 "data_size": 63488 00:11:38.691 } 00:11:38.691 ] 00:11:38.691 }' 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.691 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.261 [2024-11-19 
10:22:52.913606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:39.261 [2024-11-19 10:22:52.913721] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.261 [2024-11-19 10:22:52.913783] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:39.261 [2024-11-19 10:22:52.913853] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.261 [2024-11-19 10:22:52.914378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.261 [2024-11-19 10:22:52.914441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:39.261 [2024-11-19 10:22:52.914574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:39.261 [2024-11-19 10:22:52.914640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:39.261 [2024-11-19 10:22:52.914825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:39.261 [2024-11-19 10:22:52.914868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:39.261 [2024-11-19 10:22:52.915177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:39.261 [2024-11-19 10:22:52.915400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:39.261 [2024-11-19 10:22:52.915448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:39.261 [2024-11-19 10:22:52.915667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.261 pt4 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:39.261 10:22:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.261 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.261 "name": "raid_bdev1", 00:11:39.262 "uuid": "7f1a26e5-4696-42f8-81c4-151afec74055", 00:11:39.262 "strip_size_kb": 0, 00:11:39.262 "state": "online", 00:11:39.262 "raid_level": "raid1", 00:11:39.262 "superblock": true, 00:11:39.262 "num_base_bdevs": 4, 00:11:39.262 "num_base_bdevs_discovered": 3, 00:11:39.262 "num_base_bdevs_operational": 3, 00:11:39.262 "base_bdevs_list": [ 00:11:39.262 { 
00:11:39.262 "name": null, 00:11:39.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.262 "is_configured": false, 00:11:39.262 "data_offset": 2048, 00:11:39.262 "data_size": 63488 00:11:39.262 }, 00:11:39.262 { 00:11:39.262 "name": "pt2", 00:11:39.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.262 "is_configured": true, 00:11:39.262 "data_offset": 2048, 00:11:39.262 "data_size": 63488 00:11:39.262 }, 00:11:39.262 { 00:11:39.262 "name": "pt3", 00:11:39.262 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.262 "is_configured": true, 00:11:39.262 "data_offset": 2048, 00:11:39.262 "data_size": 63488 00:11:39.262 }, 00:11:39.262 { 00:11:39.262 "name": "pt4", 00:11:39.262 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:39.262 "is_configured": true, 00:11:39.262 "data_offset": 2048, 00:11:39.262 "data_size": 63488 00:11:39.262 } 00:11:39.262 ] 00:11:39.262 }' 00:11:39.262 10:22:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.262 10:22:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:39.832 
10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.832 [2024-11-19 10:22:53.480987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7f1a26e5-4696-42f8-81c4-151afec74055 '!=' 7f1a26e5-4696-42f8-81c4-151afec74055 ']' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74278 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74278 ']' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74278 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74278 00:11:39.832 killing process with pid 74278 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74278' 00:11:39.832 10:22:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74278 00:11:39.832 [2024-11-19 10:22:53.555888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.832 [2024-11-19 10:22:53.556011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.832 10:22:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74278 00:11:39.832 [2024-11-19 10:22:53.556096] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.832 [2024-11-19 10:22:53.556111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:40.402 [2024-11-19 10:22:54.010883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.783 10:22:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:41.783 ************************************ 00:11:41.783 END TEST raid_superblock_test 00:11:41.783 ************************************ 00:11:41.783 00:11:41.783 real 0m9.043s 00:11:41.783 user 0m14.199s 00:11:41.783 sys 0m1.647s 00:11:41.783 10:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.783 10:22:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.783 10:22:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:41.783 10:22:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:41.783 10:22:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.783 10:22:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.783 ************************************ 00:11:41.783 START TEST raid_read_error_test 00:11:41.783 ************************************ 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:41.783 
10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:41.783 10:22:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RHP40k05U2 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74771 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74771 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74771 ']' 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.783 10:22:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.783 [2024-11-19 10:22:55.418826] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:41.783 [2024-11-19 10:22:55.419315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74771 ] 00:11:42.043 [2024-11-19 10:22:55.593878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.043 [2024-11-19 10:22:55.709452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.302 [2024-11-19 10:22:55.907622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.302 [2024-11-19 10:22:55.907765] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 BaseBdev1_malloc 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 true 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.563 [2024-11-19 10:22:56.299833] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:42.563 [2024-11-19 10:22:56.299892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.563 [2024-11-19 10:22:56.299912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:42.563 [2024-11-19 10:22:56.299924] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.563 [2024-11-19 10:22:56.302043] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.563 [2024-11-19 10:22:56.302080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:42.563 BaseBdev1 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.563 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 BaseBdev2_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 true 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 [2024-11-19 10:22:56.357545] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:42.823 [2024-11-19 10:22:56.357603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.823 [2024-11-19 10:22:56.357621] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:42.823 [2024-11-19 10:22:56.357631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.823 [2024-11-19 10:22:56.359733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.823 BaseBdev2 00:11:42.823 [2024-11-19 10:22:56.359832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 BaseBdev3_malloc 00:11:42.823 10:22:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 true 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 [2024-11-19 10:22:56.426776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:42.823 [2024-11-19 10:22:56.426829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.823 [2024-11-19 10:22:56.426844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:42.823 [2024-11-19 10:22:56.426854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.823 [2024-11-19 10:22:56.428933] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.823 [2024-11-19 10:22:56.428970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:42.823 BaseBdev3 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 BaseBdev4_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 true 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 [2024-11-19 10:22:56.493185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:42.823 [2024-11-19 10:22:56.493291] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.823 [2024-11-19 10:22:56.493311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:42.823 [2024-11-19 10:22:56.493337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.823 [2024-11-19 10:22:56.495375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.823 [2024-11-19 10:22:56.495413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:42.823 BaseBdev4 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.823 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.823 [2024-11-19 10:22:56.505221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.823 [2024-11-19 10:22:56.507080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:42.823 [2024-11-19 10:22:56.507165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:42.823 [2024-11-19 10:22:56.507247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:42.823 [2024-11-19 10:22:56.507488] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:42.824 [2024-11-19 10:22:56.507503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.824 [2024-11-19 10:22:56.507764] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:42.824 [2024-11-19 10:22:56.507925] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:42.824 [2024-11-19 10:22:56.507935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:42.824 [2024-11-19 10:22:56.508126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:42.824 10:22:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.824 "name": "raid_bdev1", 00:11:42.824 "uuid": "0c1345dc-6dcb-40af-849c-7dd70e2cbaf3", 00:11:42.824 "strip_size_kb": 0, 00:11:42.824 "state": "online", 00:11:42.824 "raid_level": "raid1", 00:11:42.824 "superblock": true, 00:11:42.824 "num_base_bdevs": 4, 00:11:42.824 "num_base_bdevs_discovered": 4, 00:11:42.824 "num_base_bdevs_operational": 4, 00:11:42.824 "base_bdevs_list": [ 00:11:42.824 { 
00:11:42.824 "name": "BaseBdev1", 00:11:42.824 "uuid": "39224c16-2aed-54fb-83b9-8241958cc72c", 00:11:42.824 "is_configured": true, 00:11:42.824 "data_offset": 2048, 00:11:42.824 "data_size": 63488 00:11:42.824 }, 00:11:42.824 { 00:11:42.824 "name": "BaseBdev2", 00:11:42.824 "uuid": "da1e7da3-2e86-5d98-bceb-245a71d431b6", 00:11:42.824 "is_configured": true, 00:11:42.824 "data_offset": 2048, 00:11:42.824 "data_size": 63488 00:11:42.824 }, 00:11:42.824 { 00:11:42.824 "name": "BaseBdev3", 00:11:42.824 "uuid": "a3fa0a6b-cdca-5cff-b3a8-8b609aa133bf", 00:11:42.824 "is_configured": true, 00:11:42.824 "data_offset": 2048, 00:11:42.824 "data_size": 63488 00:11:42.824 }, 00:11:42.824 { 00:11:42.824 "name": "BaseBdev4", 00:11:42.824 "uuid": "81edcf12-4258-58c6-8719-8c589c54db9a", 00:11:42.824 "is_configured": true, 00:11:42.824 "data_offset": 2048, 00:11:42.824 "data_size": 63488 00:11:42.824 } 00:11:42.824 ] 00:11:42.824 }' 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.824 10:22:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.393 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:43.393 10:22:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:43.393 [2024-11-19 10:22:57.013708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.331 10:22:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.331 10:22:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.331 "name": "raid_bdev1", 00:11:44.331 "uuid": "0c1345dc-6dcb-40af-849c-7dd70e2cbaf3", 00:11:44.331 "strip_size_kb": 0, 00:11:44.331 "state": "online", 00:11:44.331 "raid_level": "raid1", 00:11:44.331 "superblock": true, 00:11:44.331 "num_base_bdevs": 4, 00:11:44.331 "num_base_bdevs_discovered": 4, 00:11:44.331 "num_base_bdevs_operational": 4, 00:11:44.331 "base_bdevs_list": [ 00:11:44.331 { 00:11:44.331 "name": "BaseBdev1", 00:11:44.331 "uuid": "39224c16-2aed-54fb-83b9-8241958cc72c", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 }, 00:11:44.331 { 00:11:44.331 "name": "BaseBdev2", 00:11:44.331 "uuid": "da1e7da3-2e86-5d98-bceb-245a71d431b6", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 }, 00:11:44.331 { 00:11:44.331 "name": "BaseBdev3", 00:11:44.331 "uuid": "a3fa0a6b-cdca-5cff-b3a8-8b609aa133bf", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 }, 00:11:44.331 { 00:11:44.331 "name": "BaseBdev4", 00:11:44.331 "uuid": "81edcf12-4258-58c6-8719-8c589c54db9a", 00:11:44.331 "is_configured": true, 00:11:44.331 "data_offset": 2048, 00:11:44.331 "data_size": 63488 00:11:44.331 } 00:11:44.331 ] 00:11:44.331 }' 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.331 10:22:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.901 [2024-11-19 10:22:58.378446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:44.901 [2024-11-19 10:22:58.378481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:44.901 [2024-11-19 10:22:58.381006] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:44.901 [2024-11-19 10:22:58.381066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.901 [2024-11-19 10:22:58.381181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:44.901 [2024-11-19 10:22:58.381193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:44.901 { 00:11:44.901 "results": [ 00:11:44.901 { 00:11:44.901 "job": "raid_bdev1", 00:11:44.901 "core_mask": "0x1", 00:11:44.901 "workload": "randrw", 00:11:44.901 "percentage": 50, 00:11:44.901 "status": "finished", 00:11:44.901 "queue_depth": 1, 00:11:44.901 "io_size": 131072, 00:11:44.901 "runtime": 1.365574, 00:11:44.901 "iops": 11047.369091678664, 00:11:44.901 "mibps": 1380.921136459833, 00:11:44.901 "io_failed": 0, 00:11:44.901 "io_timeout": 0, 00:11:44.901 "avg_latency_us": 87.97526750560253, 00:11:44.901 "min_latency_us": 21.799126637554586, 00:11:44.901 "max_latency_us": 1545.3903930131005 00:11:44.901 } 00:11:44.901 ], 00:11:44.901 "core_count": 1 00:11:44.901 } 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74771 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74771 ']' 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74771 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74771 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:44.901 killing process with pid 74771 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74771' 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74771 00:11:44.901 [2024-11-19 10:22:58.418474] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:44.901 10:22:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74771 00:11:45.172 [2024-11-19 10:22:58.738646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RHP40k05U2 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:46.131 00:11:46.131 real 0m4.576s 00:11:46.131 user 0m5.372s 00:11:46.131 sys 0m0.577s 
00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.131 10:22:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.131 ************************************ 00:11:46.131 END TEST raid_read_error_test 00:11:46.131 ************************************ 00:11:46.392 10:22:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:46.392 10:22:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:46.392 10:22:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.392 10:22:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 ************************************ 00:11:46.392 START TEST raid_write_error_test 00:11:46.392 ************************************ 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.legMNwhMyI 00:11:46.392 10:22:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74917 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74917 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74917 ']' 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:46.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:46.392 10:22:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.392 [2024-11-19 10:23:00.059891] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:11:46.392 [2024-11-19 10:23:00.060122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74917 ] 00:11:46.652 [2024-11-19 10:23:00.231554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.652 [2024-11-19 10:23:00.340850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.912 [2024-11-19 10:23:00.542486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.912 [2024-11-19 10:23:00.542589] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.173 BaseBdev1_malloc 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.173 true 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.173 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 [2024-11-19 10:23:00.954238] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:47.434 [2024-11-19 10:23:00.954335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.434 [2024-11-19 10:23:00.954359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:47.434 [2024-11-19 10:23:00.954370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.434 [2024-11-19 10:23:00.956441] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.434 [2024-11-19 10:23:00.956481] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:47.434 BaseBdev1 00:11:47.434 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.434 10:23:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:47.434 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 BaseBdev2_malloc 00:11:47.434 10:23:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:47.434 10:23:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 true 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 [2024-11-19 10:23:01.019734] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:47.434 [2024-11-19 10:23:01.019787] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.434 [2024-11-19 10:23:01.019804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:47.434 [2024-11-19 10:23:01.019814] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.434 [2024-11-19 10:23:01.021925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.434 [2024-11-19 10:23:01.021962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:47.434 BaseBdev2 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:47.434 BaseBdev3_malloc 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 true 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 [2024-11-19 10:23:01.093755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:47.434 [2024-11-19 10:23:01.093805] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.434 [2024-11-19 10:23:01.093822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:47.434 [2024-11-19 10:23:01.093832] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.434 [2024-11-19 10:23:01.095953] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.434 [2024-11-19 10:23:01.096010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:47.434 BaseBdev3 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 BaseBdev4_malloc 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 true 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 [2024-11-19 10:23:01.158436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:47.434 [2024-11-19 10:23:01.158487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:47.434 [2024-11-19 10:23:01.158521] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:47.434 [2024-11-19 10:23:01.158530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:47.434 [2024-11-19 10:23:01.160548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:47.434 [2024-11-19 10:23:01.160644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:47.434 BaseBdev4 
00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.434 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.434 [2024-11-19 10:23:01.170474] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.434 [2024-11-19 10:23:01.172315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.435 [2024-11-19 10:23:01.172441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.435 [2024-11-19 10:23:01.172542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.435 [2024-11-19 10:23:01.172788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:47.435 [2024-11-19 10:23:01.172837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:47.435 [2024-11-19 10:23:01.173083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:47.435 [2024-11-19 10:23:01.173278] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:47.435 [2024-11-19 10:23:01.173318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:47.435 [2024-11-19 10:23:01.173505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.435 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.695 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.695 "name": "raid_bdev1", 00:11:47.695 "uuid": "baf51c65-1288-4c2b-bfb3-e6f3158f0ed4", 00:11:47.695 "strip_size_kb": 0, 00:11:47.695 "state": "online", 00:11:47.695 "raid_level": "raid1", 00:11:47.695 "superblock": true, 00:11:47.695 "num_base_bdevs": 4, 00:11:47.695 "num_base_bdevs_discovered": 4, 00:11:47.695 
"num_base_bdevs_operational": 4, 00:11:47.695 "base_bdevs_list": [ 00:11:47.695 { 00:11:47.695 "name": "BaseBdev1", 00:11:47.695 "uuid": "aa6d1c8e-95e1-5544-849c-66eaa233ae6a", 00:11:47.695 "is_configured": true, 00:11:47.695 "data_offset": 2048, 00:11:47.695 "data_size": 63488 00:11:47.695 }, 00:11:47.695 { 00:11:47.695 "name": "BaseBdev2", 00:11:47.695 "uuid": "5de14081-cbeb-5451-9366-4c433349839c", 00:11:47.695 "is_configured": true, 00:11:47.695 "data_offset": 2048, 00:11:47.695 "data_size": 63488 00:11:47.695 }, 00:11:47.695 { 00:11:47.695 "name": "BaseBdev3", 00:11:47.695 "uuid": "1eb3fc28-2671-5bac-9424-c33debffc9a2", 00:11:47.695 "is_configured": true, 00:11:47.695 "data_offset": 2048, 00:11:47.695 "data_size": 63488 00:11:47.695 }, 00:11:47.695 { 00:11:47.695 "name": "BaseBdev4", 00:11:47.695 "uuid": "f6057e6d-a2fa-5e89-ae8a-ee1c839387f5", 00:11:47.695 "is_configured": true, 00:11:47.695 "data_offset": 2048, 00:11:47.695 "data_size": 63488 00:11:47.695 } 00:11:47.695 ] 00:11:47.695 }' 00:11:47.695 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.695 10:23:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.955 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.955 10:23:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.955 [2024-11-19 10:23:01.698812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.895 [2024-11-19 10:23:02.614159] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:48.895 [2024-11-19 10:23:02.614217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:48.895 [2024-11-19 10:23:02.614461] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.895 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.156 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.156 "name": "raid_bdev1", 00:11:49.156 "uuid": "baf51c65-1288-4c2b-bfb3-e6f3158f0ed4", 00:11:49.156 "strip_size_kb": 0, 00:11:49.156 "state": "online", 00:11:49.156 "raid_level": "raid1", 00:11:49.156 "superblock": true, 00:11:49.156 "num_base_bdevs": 4, 00:11:49.156 "num_base_bdevs_discovered": 3, 00:11:49.156 "num_base_bdevs_operational": 3, 00:11:49.156 "base_bdevs_list": [ 00:11:49.156 { 00:11:49.156 "name": null, 00:11:49.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.156 "is_configured": false, 00:11:49.156 "data_offset": 0, 00:11:49.156 "data_size": 63488 00:11:49.156 }, 00:11:49.156 { 00:11:49.156 "name": "BaseBdev2", 00:11:49.156 "uuid": "5de14081-cbeb-5451-9366-4c433349839c", 00:11:49.156 "is_configured": true, 00:11:49.156 "data_offset": 2048, 00:11:49.156 "data_size": 63488 00:11:49.156 }, 00:11:49.156 { 00:11:49.156 "name": "BaseBdev3", 00:11:49.156 "uuid": "1eb3fc28-2671-5bac-9424-c33debffc9a2", 00:11:49.156 "is_configured": true, 00:11:49.156 "data_offset": 2048, 00:11:49.156 "data_size": 63488 00:11:49.156 }, 00:11:49.156 { 00:11:49.156 "name": "BaseBdev4", 00:11:49.156 "uuid": "f6057e6d-a2fa-5e89-ae8a-ee1c839387f5", 00:11:49.156 "is_configured": true, 00:11:49.156 "data_offset": 2048, 00:11:49.156 "data_size": 63488 00:11:49.156 } 00:11:49.156 ] 
00:11:49.156 }' 00:11:49.156 10:23:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.156 10:23:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.416 [2024-11-19 10:23:03.037199] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:49.416 [2024-11-19 10:23:03.037292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:49.416 [2024-11-19 10:23:03.039990] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:49.416 [2024-11-19 10:23:03.040090] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:49.416 [2024-11-19 10:23:03.040213] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:49.416 [2024-11-19 10:23:03.040264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.416 { 00:11:49.416 "results": [ 00:11:49.416 { 00:11:49.416 "job": "raid_bdev1", 00:11:49.416 "core_mask": "0x1", 00:11:49.416 "workload": "randrw", 00:11:49.416 "percentage": 50, 00:11:49.416 "status": "finished", 00:11:49.416 "queue_depth": 1, 00:11:49.416 "io_size": 131072, 00:11:49.416 "runtime": 1.339225, 00:11:49.416 "iops": 11878.51182586944, 00:11:49.416 "mibps": 1484.81397823368, 00:11:49.416 "io_failed": 0, 00:11:49.416 "io_timeout": 0, 00:11:49.416 "avg_latency_us": 81.588772120918, 00:11:49.416 "min_latency_us": 22.91703056768559, 00:11:49.416 
"max_latency_us": 1423.7624454148472 00:11:49.416 } 00:11:49.416 ], 00:11:49.416 "core_count": 1 00:11:49.416 } 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74917 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74917 ']' 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74917 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74917 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74917' 00:11:49.416 killing process with pid 74917 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74917 00:11:49.416 [2024-11-19 10:23:03.078683] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:49.416 10:23:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74917 00:11:49.676 [2024-11-19 10:23:03.396592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.legMNwhMyI 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:51.057 ************************************ 00:11:51.057 END TEST raid_write_error_test 00:11:51.057 ************************************ 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:51.057 00:11:51.057 real 0m4.589s 00:11:51.057 user 0m5.415s 00:11:51.057 sys 0m0.558s 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.057 10:23:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.057 10:23:04 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:51.057 10:23:04 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:51.057 10:23:04 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:51.057 10:23:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:51.057 10:23:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.057 10:23:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:51.057 ************************************ 00:11:51.057 START TEST raid_rebuild_test 00:11:51.057 ************************************ 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:51.058 
10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75055 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75055 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75055 ']' 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.058 10:23:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.058 [2024-11-19 10:23:04.713186] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:11:51.058 [2024-11-19 10:23:04.713400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:11:51.058 Zero copy mechanism will not be used. 
00:11:51.058 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75055 ] 00:11:51.317 [2024-11-19 10:23:04.885883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.317 [2024-11-19 10:23:04.996956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.584 [2024-11-19 10:23:05.178488] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.584 [2024-11-19 10:23:05.178543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 BaseBdev1_malloc 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 [2024-11-19 10:23:05.579366] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:51.855 [2024-11-19 10:23:05.579506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.855 [2024-11-19 
10:23:05.579549] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:51.855 [2024-11-19 10:23:05.579584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.855 [2024-11-19 10:23:05.581602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.855 [2024-11-19 10:23:05.581677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.855 BaseBdev1 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.855 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.855 BaseBdev2_malloc 00:11:51.856 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.856 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:51.856 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.856 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.856 [2024-11-19 10:23:05.633789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:51.856 [2024-11-19 10:23:05.633895] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.856 [2024-11-19 10:23:05.633946] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:51.856 [2024-11-19 10:23:05.633979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:11:52.115 [2024-11-19 10:23:05.635977] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.115 [2024-11-19 10:23:05.636060] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:52.115 BaseBdev2 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 spare_malloc 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 spare_delay 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 [2024-11-19 10:23:05.734597] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:52.115 [2024-11-19 10:23:05.734652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.115 [2024-11-19 10:23:05.734687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009080 00:11:52.115 [2024-11-19 10:23:05.734697] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.115 [2024-11-19 10:23:05.736741] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.115 [2024-11-19 10:23:05.736780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:52.115 spare 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 [2024-11-19 10:23:05.746625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:52.115 [2024-11-19 10:23:05.748335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.115 [2024-11-19 10:23:05.748417] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:52.115 [2024-11-19 10:23:05.748431] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:52.115 [2024-11-19 10:23:05.748661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:52.115 [2024-11-19 10:23:05.748817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:52.115 [2024-11-19 10:23:05.748828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:52.115 [2024-11-19 10:23:05.748956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.115 
10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.115 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.115 "name": "raid_bdev1", 00:11:52.115 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:11:52.115 "strip_size_kb": 0, 00:11:52.115 "state": "online", 00:11:52.115 "raid_level": "raid1", 00:11:52.115 "superblock": false, 00:11:52.115 "num_base_bdevs": 2, 00:11:52.115 "num_base_bdevs_discovered": 
2, 00:11:52.115 "num_base_bdevs_operational": 2, 00:11:52.115 "base_bdevs_list": [ 00:11:52.115 { 00:11:52.115 "name": "BaseBdev1", 00:11:52.115 "uuid": "f58bf45a-c43a-585d-b9b8-ecbfc8a945e9", 00:11:52.115 "is_configured": true, 00:11:52.115 "data_offset": 0, 00:11:52.115 "data_size": 65536 00:11:52.115 }, 00:11:52.115 { 00:11:52.116 "name": "BaseBdev2", 00:11:52.116 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:11:52.116 "is_configured": true, 00:11:52.116 "data_offset": 0, 00:11:52.116 "data_size": 65536 00:11:52.116 } 00:11:52.116 ] 00:11:52.116 }' 00:11:52.116 10:23:05 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.116 10:23:05 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.684 [2024-11-19 10:23:06.186162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.684 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:52.942 [2024-11-19 10:23:06.465437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:52.943 /dev/nbd0 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:52.943 1+0 records in 00:11:52.943 1+0 records out 00:11:52.943 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532808 s, 7.7 MB/s 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # 
write_unit_size=1 00:11:52.943 10:23:06 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:57.136 65536+0 records in 00:11:57.136 65536+0 records out 00:11:57.136 33554432 bytes (34 MB, 32 MiB) copied, 3.75473 s, 8.9 MB/s 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:57.136 [2024-11-19 10:23:10.517823] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:57.136 
10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.136 [2024-11-19 10:23:10.538716] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.136 "name": "raid_bdev1", 00:11:57.136 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:11:57.136 "strip_size_kb": 0, 00:11:57.136 "state": "online", 00:11:57.136 "raid_level": "raid1", 00:11:57.136 "superblock": false, 00:11:57.136 "num_base_bdevs": 2, 00:11:57.136 "num_base_bdevs_discovered": 1, 00:11:57.136 "num_base_bdevs_operational": 1, 00:11:57.136 "base_bdevs_list": [ 00:11:57.136 { 00:11:57.136 "name": null, 00:11:57.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.136 "is_configured": false, 00:11:57.136 "data_offset": 0, 00:11:57.136 "data_size": 65536 00:11:57.136 }, 00:11:57.136 { 00:11:57.136 "name": "BaseBdev2", 00:11:57.136 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:11:57.136 "is_configured": true, 00:11:57.136 "data_offset": 0, 00:11:57.136 "data_size": 65536 00:11:57.136 } 00:11:57.136 ] 00:11:57.136 }' 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.136 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.396 10:23:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:57.396 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.396 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.396 [2024-11-19 10:23:10.981953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:57.396 [2024-11-19 10:23:10.998365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:11:57.396 10:23:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.396 10:23:10 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:57.396 [2024-11-19 10:23:11.000282] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:58.334 "name": "raid_bdev1", 00:11:58.334 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:11:58.334 "strip_size_kb": 0, 00:11:58.334 "state": "online", 00:11:58.334 "raid_level": "raid1", 00:11:58.334 "superblock": false, 00:11:58.334 "num_base_bdevs": 2, 00:11:58.334 "num_base_bdevs_discovered": 2, 00:11:58.334 "num_base_bdevs_operational": 2, 00:11:58.334 "process": { 00:11:58.334 "type": "rebuild", 00:11:58.334 "target": "spare", 00:11:58.334 "progress": { 00:11:58.334 "blocks": 20480, 00:11:58.334 "percent": 31 00:11:58.334 } 00:11:58.334 }, 00:11:58.334 "base_bdevs_list": [ 00:11:58.334 { 
00:11:58.334 "name": "spare", 00:11:58.334 "uuid": "062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:11:58.334 "is_configured": true, 00:11:58.334 "data_offset": 0, 00:11:58.334 "data_size": 65536 00:11:58.334 }, 00:11:58.334 { 00:11:58.334 "name": "BaseBdev2", 00:11:58.334 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:11:58.334 "is_configured": true, 00:11:58.334 "data_offset": 0, 00:11:58.334 "data_size": 65536 00:11:58.334 } 00:11:58.334 ] 00:11:58.334 }' 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:58.334 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.594 [2024-11-19 10:23:12.151468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.594 [2024-11-19 10:23:12.205079] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:58.594 [2024-11-19 10:23:12.205168] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.594 [2024-11-19 10:23:12.205183] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:58.594 [2024-11-19 10:23:12.205193] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.594 10:23:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.594 "name": "raid_bdev1", 00:11:58.594 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:11:58.594 "strip_size_kb": 0, 00:11:58.594 "state": "online", 00:11:58.594 "raid_level": "raid1", 00:11:58.594 "superblock": false, 00:11:58.594 "num_base_bdevs": 2, 00:11:58.594 "num_base_bdevs_discovered": 1, 
00:11:58.594 "num_base_bdevs_operational": 1, 00:11:58.594 "base_bdevs_list": [ 00:11:58.594 { 00:11:58.594 "name": null, 00:11:58.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.594 "is_configured": false, 00:11:58.594 "data_offset": 0, 00:11:58.594 "data_size": 65536 00:11:58.594 }, 00:11:58.594 { 00:11:58.594 "name": "BaseBdev2", 00:11:58.594 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:11:58.594 "is_configured": true, 00:11:58.594 "data_offset": 0, 00:11:58.594 "data_size": 65536 00:11:58.594 } 00:11:58.594 ] 00:11:58.594 }' 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.594 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:59.162 "name": "raid_bdev1", 00:11:59.162 "uuid": 
"4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:11:59.162 "strip_size_kb": 0, 00:11:59.162 "state": "online", 00:11:59.162 "raid_level": "raid1", 00:11:59.162 "superblock": false, 00:11:59.162 "num_base_bdevs": 2, 00:11:59.162 "num_base_bdevs_discovered": 1, 00:11:59.162 "num_base_bdevs_operational": 1, 00:11:59.162 "base_bdevs_list": [ 00:11:59.162 { 00:11:59.162 "name": null, 00:11:59.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.162 "is_configured": false, 00:11:59.162 "data_offset": 0, 00:11:59.162 "data_size": 65536 00:11:59.162 }, 00:11:59.162 { 00:11:59.162 "name": "BaseBdev2", 00:11:59.162 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:11:59.162 "is_configured": true, 00:11:59.162 "data_offset": 0, 00:11:59.162 "data_size": 65536 00:11:59.162 } 00:11:59.162 ] 00:11:59.162 }' 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.162 [2024-11-19 10:23:12.803406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:59.162 [2024-11-19 10:23:12.819191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.162 10:23:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 
1 00:11:59.162 [2024-11-19 10:23:12.820913] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.100 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.100 "name": "raid_bdev1", 00:12:00.100 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:12:00.100 "strip_size_kb": 0, 00:12:00.100 "state": "online", 00:12:00.100 "raid_level": "raid1", 00:12:00.100 "superblock": false, 00:12:00.100 "num_base_bdevs": 2, 00:12:00.100 "num_base_bdevs_discovered": 2, 00:12:00.100 "num_base_bdevs_operational": 2, 00:12:00.100 "process": { 00:12:00.100 "type": "rebuild", 00:12:00.100 "target": "spare", 00:12:00.100 "progress": { 00:12:00.100 "blocks": 20480, 00:12:00.100 "percent": 31 00:12:00.100 } 00:12:00.100 }, 00:12:00.100 "base_bdevs_list": [ 00:12:00.100 { 00:12:00.100 "name": "spare", 00:12:00.100 "uuid": 
"062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:12:00.100 "is_configured": true, 00:12:00.100 "data_offset": 0, 00:12:00.100 "data_size": 65536 00:12:00.100 }, 00:12:00.100 { 00:12:00.100 "name": "BaseBdev2", 00:12:00.100 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:12:00.100 "is_configured": true, 00:12:00.100 "data_offset": 0, 00:12:00.100 "data_size": 65536 00:12:00.100 } 00:12:00.100 ] 00:12:00.100 }' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=358 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:00.360 10:23:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.360 10:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:00.360 "name": "raid_bdev1", 00:12:00.360 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:12:00.360 "strip_size_kb": 0, 00:12:00.360 "state": "online", 00:12:00.360 "raid_level": "raid1", 00:12:00.360 "superblock": false, 00:12:00.360 "num_base_bdevs": 2, 00:12:00.360 "num_base_bdevs_discovered": 2, 00:12:00.360 "num_base_bdevs_operational": 2, 00:12:00.360 "process": { 00:12:00.360 "type": "rebuild", 00:12:00.360 "target": "spare", 00:12:00.360 "progress": { 00:12:00.360 "blocks": 22528, 00:12:00.360 "percent": 34 00:12:00.360 } 00:12:00.360 }, 00:12:00.360 "base_bdevs_list": [ 00:12:00.360 { 00:12:00.360 "name": "spare", 00:12:00.360 "uuid": "062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:12:00.360 "is_configured": true, 00:12:00.360 "data_offset": 0, 00:12:00.360 "data_size": 65536 00:12:00.360 }, 00:12:00.360 { 00:12:00.360 "name": "BaseBdev2", 00:12:00.360 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:12:00.360 "is_configured": true, 00:12:00.360 "data_offset": 0, 00:12:00.360 "data_size": 65536 00:12:00.360 } 00:12:00.360 ] 00:12:00.360 }' 00:12:00.360 10:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:00.360 10:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:00.360 10:23:14 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:00.360 10:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:00.360 10:23:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.738 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:01.738 "name": "raid_bdev1", 00:12:01.738 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:12:01.738 "strip_size_kb": 0, 00:12:01.738 "state": "online", 00:12:01.738 "raid_level": "raid1", 00:12:01.738 "superblock": false, 00:12:01.738 "num_base_bdevs": 2, 00:12:01.738 "num_base_bdevs_discovered": 2, 00:12:01.738 "num_base_bdevs_operational": 2, 00:12:01.738 "process": { 00:12:01.738 "type": "rebuild", 00:12:01.738 "target": "spare", 
00:12:01.738 "progress": { 00:12:01.738 "blocks": 45056, 00:12:01.739 "percent": 68 00:12:01.739 } 00:12:01.739 }, 00:12:01.739 "base_bdevs_list": [ 00:12:01.739 { 00:12:01.739 "name": "spare", 00:12:01.739 "uuid": "062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:12:01.739 "is_configured": true, 00:12:01.739 "data_offset": 0, 00:12:01.739 "data_size": 65536 00:12:01.739 }, 00:12:01.739 { 00:12:01.739 "name": "BaseBdev2", 00:12:01.739 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:12:01.739 "is_configured": true, 00:12:01.739 "data_offset": 0, 00:12:01.739 "data_size": 65536 00:12:01.739 } 00:12:01.739 ] 00:12:01.739 }' 00:12:01.739 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:01.739 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:01.739 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:01.739 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:01.739 10:23:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:02.307 [2024-11-19 10:23:16.033488] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:02.307 [2024-11-19 10:23:16.033573] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:02.307 [2024-11-19 10:23:16.033622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.566 "name": "raid_bdev1", 00:12:02.566 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:12:02.566 "strip_size_kb": 0, 00:12:02.566 "state": "online", 00:12:02.566 "raid_level": "raid1", 00:12:02.566 "superblock": false, 00:12:02.566 "num_base_bdevs": 2, 00:12:02.566 "num_base_bdevs_discovered": 2, 00:12:02.566 "num_base_bdevs_operational": 2, 00:12:02.566 "base_bdevs_list": [ 00:12:02.566 { 00:12:02.566 "name": "spare", 00:12:02.566 "uuid": "062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:12:02.566 "is_configured": true, 00:12:02.566 "data_offset": 0, 00:12:02.566 "data_size": 65536 00:12:02.566 }, 00:12:02.566 { 00:12:02.566 "name": "BaseBdev2", 00:12:02.566 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:12:02.566 "is_configured": true, 00:12:02.566 "data_offset": 0, 00:12:02.566 "data_size": 65536 00:12:02.566 } 00:12:02.566 ] 00:12:02.566 }' 00:12:02.566 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.826 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.826 "name": "raid_bdev1", 00:12:02.826 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:12:02.826 "strip_size_kb": 0, 00:12:02.826 "state": "online", 00:12:02.826 "raid_level": "raid1", 00:12:02.826 "superblock": false, 00:12:02.826 "num_base_bdevs": 2, 00:12:02.827 "num_base_bdevs_discovered": 2, 00:12:02.827 "num_base_bdevs_operational": 2, 00:12:02.827 "base_bdevs_list": [ 00:12:02.827 { 00:12:02.827 "name": "spare", 00:12:02.827 "uuid": "062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:12:02.827 "is_configured": true, 00:12:02.827 "data_offset": 0, 00:12:02.827 "data_size": 65536 
00:12:02.827 }, 00:12:02.827 { 00:12:02.827 "name": "BaseBdev2", 00:12:02.827 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:12:02.827 "is_configured": true, 00:12:02.827 "data_offset": 0, 00:12:02.827 "data_size": 65536 00:12:02.827 } 00:12:02.827 ] 00:12:02.827 }' 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.827 "name": "raid_bdev1", 00:12:02.827 "uuid": "4d05bc30-0cf6-4fb1-ae93-e9008e97b562", 00:12:02.827 "strip_size_kb": 0, 00:12:02.827 "state": "online", 00:12:02.827 "raid_level": "raid1", 00:12:02.827 "superblock": false, 00:12:02.827 "num_base_bdevs": 2, 00:12:02.827 "num_base_bdevs_discovered": 2, 00:12:02.827 "num_base_bdevs_operational": 2, 00:12:02.827 "base_bdevs_list": [ 00:12:02.827 { 00:12:02.827 "name": "spare", 00:12:02.827 "uuid": "062a41d6-161a-5199-a0f9-dd1fa8e3ae3c", 00:12:02.827 "is_configured": true, 00:12:02.827 "data_offset": 0, 00:12:02.827 "data_size": 65536 00:12:02.827 }, 00:12:02.827 { 00:12:02.827 "name": "BaseBdev2", 00:12:02.827 "uuid": "4ff87163-7350-5387-a882-55445c93a089", 00:12:02.827 "is_configured": true, 00:12:02.827 "data_offset": 0, 00:12:02.827 "data_size": 65536 00:12:02.827 } 00:12:02.827 ] 00:12:02.827 }' 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.827 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.395 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:03.395 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.395 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.395 [2024-11-19 10:23:16.974622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:03.395 [2024-11-19 10:23:16.974713] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:12:03.395 [2024-11-19 10:23:16.974820] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.395 [2024-11-19 10:23:16.974914] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.395 [2024-11-19 10:23:16.974961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:03.395 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.395 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.396 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.396 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.396 10:23:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:03.396 10:23:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.396 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:03.655 /dev/nbd0 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:03.655 1+0 records in 00:12:03.655 1+0 records out 00:12:03.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355621 s, 11.5 MB/s 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.655 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:03.915 /dev/nbd1 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:12:03.915 1+0 records in 00:12:03.915 1+0 records out 00:12:03.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411992 s, 9.9 MB/s 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:03.915 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:04.174 10:23:17 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:04.174 10:23:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75055 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 
75055 ']' 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75055 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75055 00:12:04.433 killing process with pid 75055 00:12:04.433 Received shutdown signal, test time was about 60.000000 seconds 00:12:04.433 00:12:04.433 Latency(us) 00:12:04.433 [2024-11-19T10:23:18.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.433 [2024-11-19T10:23:18.214Z] =================================================================================================================== 00:12:04.433 [2024-11-19T10:23:18.214Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75055' 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75055 00:12:04.433 [2024-11-19 10:23:18.162761] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:04.433 10:23:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75055 00:12:04.694 [2024-11-19 10:23:18.446818] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:06.071 10:23:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:06.071 ************************************ 00:12:06.071 END TEST raid_rebuild_test 00:12:06.071 ************************************ 00:12:06.071 00:12:06.071 real 0m14.869s 
00:12:06.071 user 0m17.042s 00:12:06.071 sys 0m2.944s 00:12:06.071 10:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:06.071 10:23:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.071 10:23:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:06.071 10:23:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:06.071 10:23:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:06.072 10:23:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:06.072 ************************************ 00:12:06.072 START TEST raid_rebuild_test_sb 00:12:06.072 ************************************ 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75474 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75474 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75474 ']' 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.072 10:23:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:06.072 Zero copy mechanism will not be used. 00:12:06.072 [2024-11-19 10:23:19.655673] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:06.072 [2024-11-19 10:23:19.655790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75474 ] 00:12:06.072 [2024-11-19 10:23:19.828079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.331 [2024-11-19 10:23:19.938527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.597 [2024-11-19 10:23:20.119822] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.597 [2024-11-19 10:23:20.119873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.876 10:23:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.876 BaseBdev1_malloc 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.876 [2024-11-19 10:23:20.519171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:06.876 [2024-11-19 10:23:20.519244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.876 [2024-11-19 10:23:20.519267] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:06.876 [2024-11-19 10:23:20.519278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.876 [2024-11-19 10:23:20.521317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.876 [2024-11-19 10:23:20.521357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:06.876 BaseBdev1 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.876 BaseBdev2_malloc 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.876 [2024-11-19 10:23:20.572534] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:06.876 [2024-11-19 10:23:20.572590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.876 [2024-11-19 10:23:20.572607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:06.876 [2024-11-19 10:23:20.572619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.876 [2024-11-19 10:23:20.574566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.876 [2024-11-19 10:23:20.574605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:06.876 BaseBdev2 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.876 spare_malloc 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.876 10:23:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.876 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.135 spare_delay 00:12:07.135 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.135 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:07.135 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.135 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.136 [2024-11-19 10:23:20.671510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:07.136 [2024-11-19 10:23:20.671564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.136 [2024-11-19 10:23:20.671582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:07.136 [2024-11-19 10:23:20.671592] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.136 [2024-11-19 10:23:20.673585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.136 [2024-11-19 10:23:20.673624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:07.136 spare 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:07.136 [2024-11-19 10:23:20.683549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:07.136 [2024-11-19 10:23:20.685254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.136 [2024-11-19 10:23:20.685418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:07.136 [2024-11-19 10:23:20.685435] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:07.136 [2024-11-19 10:23:20.685667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:07.136 [2024-11-19 10:23:20.685815] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:07.136 [2024-11-19 10:23:20.685824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:07.136 [2024-11-19 10:23:20.685939] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.136 10:23:20 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.136 "name": "raid_bdev1", 00:12:07.136 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:07.136 "strip_size_kb": 0, 00:12:07.136 "state": "online", 00:12:07.136 "raid_level": "raid1", 00:12:07.136 "superblock": true, 00:12:07.136 "num_base_bdevs": 2, 00:12:07.136 "num_base_bdevs_discovered": 2, 00:12:07.136 "num_base_bdevs_operational": 2, 00:12:07.136 "base_bdevs_list": [ 00:12:07.136 { 00:12:07.136 "name": "BaseBdev1", 00:12:07.136 "uuid": "48d541cf-f92e-555c-9ab9-3cccc9bb3402", 00:12:07.136 "is_configured": true, 00:12:07.136 "data_offset": 2048, 00:12:07.136 "data_size": 63488 00:12:07.136 }, 00:12:07.136 { 00:12:07.136 "name": "BaseBdev2", 00:12:07.136 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:07.136 "is_configured": true, 00:12:07.136 "data_offset": 2048, 00:12:07.136 "data_size": 63488 00:12:07.136 } 00:12:07.136 ] 00:12:07.136 }' 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.136 10:23:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:12:07.395 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.396 [2024-11-19 10:23:21.107078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.396 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:07.655 
10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:07.655 [2024-11-19 10:23:21.374418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:07.655 /dev/nbd0 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:07.655 10:23:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:07.655 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:07.655 1+0 records in 00:12:07.655 1+0 records out 00:12:07.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308718 s, 13.3 MB/s 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:07.915 10:23:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:12.112 63488+0 records in 00:12:12.112 63488+0 records out 00:12:12.112 32505856 bytes (33 MB, 31 MiB) copied, 3.69192 s, 8.8 MB/s 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:12.112 [2024-11-19 10:23:25.360674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.112 [2024-11-19 10:23:25.378047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.112 10:23:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.112 "name": "raid_bdev1", 00:12:12.112 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:12.112 "strip_size_kb": 0, 00:12:12.112 "state": "online", 00:12:12.112 "raid_level": "raid1", 00:12:12.112 "superblock": true, 00:12:12.112 "num_base_bdevs": 2, 
00:12:12.112 "num_base_bdevs_discovered": 1, 00:12:12.112 "num_base_bdevs_operational": 1, 00:12:12.112 "base_bdevs_list": [ 00:12:12.112 { 00:12:12.112 "name": null, 00:12:12.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.112 "is_configured": false, 00:12:12.112 "data_offset": 0, 00:12:12.112 "data_size": 63488 00:12:12.112 }, 00:12:12.112 { 00:12:12.112 "name": "BaseBdev2", 00:12:12.112 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:12.112 "is_configured": true, 00:12:12.112 "data_offset": 2048, 00:12:12.112 "data_size": 63488 00:12:12.112 } 00:12:12.112 ] 00:12:12.112 }' 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.112 [2024-11-19 10:23:25.841277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.112 [2024-11-19 10:23:25.856624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.112 [2024-11-19 10:23:25.858568] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.112 10:23:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.492 10:23:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.492 "name": "raid_bdev1", 00:12:13.492 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:13.492 "strip_size_kb": 0, 00:12:13.492 "state": "online", 00:12:13.492 "raid_level": "raid1", 00:12:13.492 "superblock": true, 00:12:13.492 "num_base_bdevs": 2, 00:12:13.492 "num_base_bdevs_discovered": 2, 00:12:13.492 "num_base_bdevs_operational": 2, 00:12:13.492 "process": { 00:12:13.492 "type": "rebuild", 00:12:13.492 "target": "spare", 00:12:13.492 "progress": { 00:12:13.492 "blocks": 20480, 00:12:13.492 "percent": 32 00:12:13.492 } 00:12:13.492 }, 00:12:13.492 "base_bdevs_list": [ 00:12:13.492 { 00:12:13.492 "name": "spare", 00:12:13.492 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:13.492 "is_configured": true, 00:12:13.492 "data_offset": 2048, 00:12:13.492 "data_size": 63488 00:12:13.492 }, 00:12:13.492 { 00:12:13.492 "name": "BaseBdev2", 00:12:13.492 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:13.492 "is_configured": true, 00:12:13.492 "data_offset": 2048, 00:12:13.492 "data_size": 63488 00:12:13.492 } 
00:12:13.492 ] 00:12:13.492 }' 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.492 10:23:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.492 [2024-11-19 10:23:27.006964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.492 [2024-11-19 10:23:27.063430] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:13.492 [2024-11-19 10:23:27.063490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.492 [2024-11-19 10:23:27.063505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.492 [2024-11-19 10:23:27.063514] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.492 "name": "raid_bdev1", 00:12:13.492 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:13.492 "strip_size_kb": 0, 00:12:13.492 "state": "online", 00:12:13.492 "raid_level": "raid1", 00:12:13.492 "superblock": true, 00:12:13.492 "num_base_bdevs": 2, 00:12:13.492 "num_base_bdevs_discovered": 1, 00:12:13.492 "num_base_bdevs_operational": 1, 00:12:13.492 "base_bdevs_list": [ 00:12:13.492 { 00:12:13.492 "name": null, 00:12:13.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.492 "is_configured": false, 00:12:13.492 "data_offset": 0, 00:12:13.492 "data_size": 63488 00:12:13.492 }, 00:12:13.492 { 00:12:13.492 "name": "BaseBdev2", 00:12:13.492 "uuid": 
"c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:13.492 "is_configured": true, 00:12:13.492 "data_offset": 2048, 00:12:13.492 "data_size": 63488 00:12:13.492 } 00:12:13.492 ] 00:12:13.492 }' 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.492 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.752 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:14.012 "name": "raid_bdev1", 00:12:14.012 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:14.012 "strip_size_kb": 0, 00:12:14.012 "state": "online", 00:12:14.012 "raid_level": "raid1", 00:12:14.012 "superblock": true, 00:12:14.012 "num_base_bdevs": 2, 00:12:14.012 "num_base_bdevs_discovered": 1, 00:12:14.012 "num_base_bdevs_operational": 1, 00:12:14.012 "base_bdevs_list": [ 00:12:14.012 { 
00:12:14.012 "name": null, 00:12:14.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.012 "is_configured": false, 00:12:14.012 "data_offset": 0, 00:12:14.012 "data_size": 63488 00:12:14.012 }, 00:12:14.012 { 00:12:14.012 "name": "BaseBdev2", 00:12:14.012 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:14.012 "is_configured": true, 00:12:14.012 "data_offset": 2048, 00:12:14.012 "data_size": 63488 00:12:14.012 } 00:12:14.012 ] 00:12:14.012 }' 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.012 [2024-11-19 10:23:27.664620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:14.012 [2024-11-19 10:23:27.679706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.012 10:23:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:14.012 [2024-11-19 10:23:27.681558] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:14.951 10:23:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.951 10:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.239 "name": "raid_bdev1", 00:12:15.239 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:15.239 "strip_size_kb": 0, 00:12:15.239 "state": "online", 00:12:15.239 "raid_level": "raid1", 00:12:15.239 "superblock": true, 00:12:15.239 "num_base_bdevs": 2, 00:12:15.239 "num_base_bdevs_discovered": 2, 00:12:15.239 "num_base_bdevs_operational": 2, 00:12:15.239 "process": { 00:12:15.239 "type": "rebuild", 00:12:15.239 "target": "spare", 00:12:15.239 "progress": { 00:12:15.239 "blocks": 20480, 00:12:15.239 "percent": 32 00:12:15.239 } 00:12:15.239 }, 00:12:15.239 "base_bdevs_list": [ 00:12:15.239 { 00:12:15.239 "name": "spare", 00:12:15.239 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:15.239 "is_configured": true, 00:12:15.239 "data_offset": 2048, 00:12:15.239 "data_size": 63488 00:12:15.239 }, 00:12:15.239 { 00:12:15.239 "name": "BaseBdev2", 00:12:15.239 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:15.239 
"is_configured": true, 00:12:15.239 "data_offset": 2048, 00:12:15.239 "data_size": 63488 00:12:15.239 } 00:12:15.239 ] 00:12:15.239 }' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:15.239 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=373 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.239 "name": "raid_bdev1", 00:12:15.239 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:15.239 "strip_size_kb": 0, 00:12:15.239 "state": "online", 00:12:15.239 "raid_level": "raid1", 00:12:15.239 "superblock": true, 00:12:15.239 "num_base_bdevs": 2, 00:12:15.239 "num_base_bdevs_discovered": 2, 00:12:15.239 "num_base_bdevs_operational": 2, 00:12:15.239 "process": { 00:12:15.239 "type": "rebuild", 00:12:15.239 "target": "spare", 00:12:15.239 "progress": { 00:12:15.239 "blocks": 22528, 00:12:15.239 "percent": 35 00:12:15.239 } 00:12:15.239 }, 00:12:15.239 "base_bdevs_list": [ 00:12:15.239 { 00:12:15.239 "name": "spare", 00:12:15.239 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:15.239 "is_configured": true, 00:12:15.239 "data_offset": 2048, 00:12:15.239 "data_size": 63488 00:12:15.239 }, 00:12:15.239 { 00:12:15.239 "name": "BaseBdev2", 00:12:15.239 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:15.239 "is_configured": true, 00:12:15.239 "data_offset": 2048, 00:12:15.239 "data_size": 63488 00:12:15.239 } 00:12:15.239 ] 00:12:15.239 }' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:15.239 10:23:28 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:15.239 10:23:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.198 10:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.458 10:23:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.458 10:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:16.458 "name": "raid_bdev1", 00:12:16.458 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:16.458 "strip_size_kb": 0, 00:12:16.458 "state": "online", 00:12:16.458 "raid_level": "raid1", 00:12:16.458 "superblock": true, 00:12:16.458 "num_base_bdevs": 2, 00:12:16.458 "num_base_bdevs_discovered": 2, 00:12:16.458 "num_base_bdevs_operational": 2, 00:12:16.458 "process": { 
00:12:16.458 "type": "rebuild", 00:12:16.458 "target": "spare", 00:12:16.458 "progress": { 00:12:16.458 "blocks": 45056, 00:12:16.458 "percent": 70 00:12:16.458 } 00:12:16.458 }, 00:12:16.458 "base_bdevs_list": [ 00:12:16.458 { 00:12:16.458 "name": "spare", 00:12:16.458 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:16.458 "is_configured": true, 00:12:16.458 "data_offset": 2048, 00:12:16.458 "data_size": 63488 00:12:16.458 }, 00:12:16.458 { 00:12:16.458 "name": "BaseBdev2", 00:12:16.458 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:16.458 "is_configured": true, 00:12:16.458 "data_offset": 2048, 00:12:16.458 "data_size": 63488 00:12:16.458 } 00:12:16.458 ] 00:12:16.458 }' 00:12:16.458 10:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:16.458 10:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:16.458 10:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:16.458 10:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:16.458 10:23:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:17.028 [2024-11-19 10:23:30.793402] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:17.028 [2024-11-19 10:23:30.793537] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:17.028 [2024-11-19 10:23:30.793670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.598 
10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.598 "name": "raid_bdev1", 00:12:17.598 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:17.598 "strip_size_kb": 0, 00:12:17.598 "state": "online", 00:12:17.598 "raid_level": "raid1", 00:12:17.598 "superblock": true, 00:12:17.598 "num_base_bdevs": 2, 00:12:17.598 "num_base_bdevs_discovered": 2, 00:12:17.598 "num_base_bdevs_operational": 2, 00:12:17.598 "base_bdevs_list": [ 00:12:17.598 { 00:12:17.598 "name": "spare", 00:12:17.598 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:17.598 "is_configured": true, 00:12:17.598 "data_offset": 2048, 00:12:17.598 "data_size": 63488 00:12:17.598 }, 00:12:17.598 { 00:12:17.598 "name": "BaseBdev2", 00:12:17.598 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:17.598 "is_configured": true, 00:12:17.598 "data_offset": 2048, 00:12:17.598 "data_size": 63488 00:12:17.598 } 00:12:17.598 ] 00:12:17.598 }' 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.598 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.599 "name": "raid_bdev1", 00:12:17.599 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:17.599 "strip_size_kb": 0, 00:12:17.599 "state": "online", 00:12:17.599 "raid_level": "raid1", 00:12:17.599 "superblock": true, 00:12:17.599 "num_base_bdevs": 2, 00:12:17.599 "num_base_bdevs_discovered": 2, 00:12:17.599 "num_base_bdevs_operational": 2, 00:12:17.599 "base_bdevs_list": [ 00:12:17.599 { 00:12:17.599 
"name": "spare", 00:12:17.599 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:17.599 "is_configured": true, 00:12:17.599 "data_offset": 2048, 00:12:17.599 "data_size": 63488 00:12:17.599 }, 00:12:17.599 { 00:12:17.599 "name": "BaseBdev2", 00:12:17.599 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:17.599 "is_configured": true, 00:12:17.599 "data_offset": 2048, 00:12:17.599 "data_size": 63488 00:12:17.599 } 00:12:17.599 ] 00:12:17.599 }' 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.599 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.858 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.858 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.858 "name": "raid_bdev1", 00:12:17.858 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:17.858 "strip_size_kb": 0, 00:12:17.858 "state": "online", 00:12:17.858 "raid_level": "raid1", 00:12:17.858 "superblock": true, 00:12:17.858 "num_base_bdevs": 2, 00:12:17.858 "num_base_bdevs_discovered": 2, 00:12:17.858 "num_base_bdevs_operational": 2, 00:12:17.858 "base_bdevs_list": [ 00:12:17.858 { 00:12:17.858 "name": "spare", 00:12:17.858 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:17.858 "is_configured": true, 00:12:17.858 "data_offset": 2048, 00:12:17.858 "data_size": 63488 00:12:17.858 }, 00:12:17.858 { 00:12:17.858 "name": "BaseBdev2", 00:12:17.858 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:17.858 "is_configured": true, 00:12:17.858 "data_offset": 2048, 00:12:17.858 "data_size": 63488 00:12:17.858 } 00:12:17.858 ] 00:12:17.858 }' 00:12:17.858 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.858 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:18.119 [2024-11-19 10:23:31.787408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.119 [2024-11-19 10:23:31.787443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.119 [2024-11-19 10:23:31.787529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.119 [2024-11-19 10:23:31.787603] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.119 [2024-11-19 10:23:31.787613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.119 10:23:31 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:18.380 /dev/nbd0 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.380 1+0 records in 00:12:18.380 1+0 records out 00:12:18.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057049 s, 7.2 MB/s 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.380 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:18.640 /dev/nbd1 00:12:18.640 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:18.640 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:18.640 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:18.640 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:18.640 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:18.641 10:23:32 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:18.641 1+0 records in 00:12:18.641 1+0 records out 00:12:18.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248353 s, 16.5 MB/s 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:18.641 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:18.901 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:18.901 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:18.901 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:18.901 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:18.901 
10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:18.901 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:18.901 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:19.161 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:19.419 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 [2024-11-19 10:23:32.977166] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.420 [2024-11-19 10:23:32.977285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.420 [2024-11-19 10:23:32.977312] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:19.420 [2024-11-19 10:23:32.977321] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.420 [2024-11-19 10:23:32.979380] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.420 [2024-11-19 10:23:32.979418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.420 [2024-11-19 10:23:32.979507] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:19.420 [2024-11-19 
10:23:32.979554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:19.420 [2024-11-19 10:23:32.979706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.420 spare 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.420 10:23:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 [2024-11-19 10:23:33.079600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:19.420 [2024-11-19 10:23:33.079671] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.420 [2024-11-19 10:23:33.079934] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:19.420 [2024-11-19 10:23:33.080125] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:19.420 [2024-11-19 10:23:33.080140] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:19.420 [2024-11-19 10:23:33.080306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.420 "name": "raid_bdev1", 00:12:19.420 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:19.420 "strip_size_kb": 0, 00:12:19.420 "state": "online", 00:12:19.420 "raid_level": "raid1", 00:12:19.420 "superblock": true, 00:12:19.420 "num_base_bdevs": 2, 00:12:19.420 "num_base_bdevs_discovered": 2, 00:12:19.420 "num_base_bdevs_operational": 2, 00:12:19.420 "base_bdevs_list": [ 00:12:19.420 { 00:12:19.420 "name": "spare", 00:12:19.420 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:19.420 "is_configured": true, 00:12:19.420 "data_offset": 2048, 00:12:19.420 "data_size": 63488 00:12:19.420 }, 00:12:19.420 { 00:12:19.420 "name": "BaseBdev2", 00:12:19.420 "uuid": 
"c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:19.420 "is_configured": true, 00:12:19.420 "data_offset": 2048, 00:12:19.420 "data_size": 63488 00:12:19.420 } 00:12:19.420 ] 00:12:19.420 }' 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.420 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:19.988 "name": "raid_bdev1", 00:12:19.988 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:19.988 "strip_size_kb": 0, 00:12:19.988 "state": "online", 00:12:19.988 "raid_level": "raid1", 00:12:19.988 "superblock": true, 00:12:19.988 "num_base_bdevs": 2, 00:12:19.988 "num_base_bdevs_discovered": 2, 00:12:19.988 "num_base_bdevs_operational": 2, 00:12:19.988 "base_bdevs_list": [ 00:12:19.988 { 
00:12:19.988 "name": "spare", 00:12:19.988 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:19.988 "is_configured": true, 00:12:19.988 "data_offset": 2048, 00:12:19.988 "data_size": 63488 00:12:19.988 }, 00:12:19.988 { 00:12:19.988 "name": "BaseBdev2", 00:12:19.988 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:19.988 "is_configured": true, 00:12:19.988 "data_offset": 2048, 00:12:19.988 "data_size": 63488 00:12:19.988 } 00:12:19.988 ] 00:12:19.988 }' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.988 [2024-11-19 10:23:33.699980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.988 "name": "raid_bdev1", 00:12:19.988 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:19.988 "strip_size_kb": 0, 00:12:19.988 
"state": "online", 00:12:19.988 "raid_level": "raid1", 00:12:19.988 "superblock": true, 00:12:19.988 "num_base_bdevs": 2, 00:12:19.988 "num_base_bdevs_discovered": 1, 00:12:19.988 "num_base_bdevs_operational": 1, 00:12:19.988 "base_bdevs_list": [ 00:12:19.988 { 00:12:19.988 "name": null, 00:12:19.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:19.988 "is_configured": false, 00:12:19.988 "data_offset": 0, 00:12:19.988 "data_size": 63488 00:12:19.988 }, 00:12:19.988 { 00:12:19.988 "name": "BaseBdev2", 00:12:19.988 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:19.988 "is_configured": true, 00:12:19.988 "data_offset": 2048, 00:12:19.988 "data_size": 63488 00:12:19.988 } 00:12:19.988 ] 00:12:19.988 }' 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.988 10:23:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.557 10:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.557 10:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.557 10:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:20.557 [2024-11-19 10:23:34.163215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.557 [2024-11-19 10:23:34.163465] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:20.557 [2024-11-19 10:23:34.163531] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:20.557 [2024-11-19 10:23:34.163595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.557 [2024-11-19 10:23:34.179668] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:20.558 10:23:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.558 10:23:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:20.558 [2024-11-19 10:23:34.181491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:21.496 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.496 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.496 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.496 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.496 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.496 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.497 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.497 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.497 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.497 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.497 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.497 "name": "raid_bdev1", 00:12:21.497 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:21.497 "strip_size_kb": 0, 00:12:21.497 "state": "online", 00:12:21.497 "raid_level": "raid1", 
00:12:21.497 "superblock": true, 00:12:21.497 "num_base_bdevs": 2, 00:12:21.497 "num_base_bdevs_discovered": 2, 00:12:21.497 "num_base_bdevs_operational": 2, 00:12:21.497 "process": { 00:12:21.497 "type": "rebuild", 00:12:21.497 "target": "spare", 00:12:21.497 "progress": { 00:12:21.497 "blocks": 20480, 00:12:21.497 "percent": 32 00:12:21.497 } 00:12:21.497 }, 00:12:21.497 "base_bdevs_list": [ 00:12:21.497 { 00:12:21.497 "name": "spare", 00:12:21.497 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:21.497 "is_configured": true, 00:12:21.497 "data_offset": 2048, 00:12:21.497 "data_size": 63488 00:12:21.497 }, 00:12:21.497 { 00:12:21.497 "name": "BaseBdev2", 00:12:21.497 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:21.497 "is_configured": true, 00:12:21.497 "data_offset": 2048, 00:12:21.497 "data_size": 63488 00:12:21.497 } 00:12:21.497 ] 00:12:21.497 }' 00:12:21.497 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.757 [2024-11-19 10:23:35.333305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.757 [2024-11-19 10:23:35.386219] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:21.757 [2024-11-19 10:23:35.386288] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:21.757 [2024-11-19 10:23:35.386302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.757 [2024-11-19 10:23:35.386311] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:21.757 "name": "raid_bdev1", 00:12:21.757 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:21.757 "strip_size_kb": 0, 00:12:21.757 "state": "online", 00:12:21.757 "raid_level": "raid1", 00:12:21.757 "superblock": true, 00:12:21.757 "num_base_bdevs": 2, 00:12:21.757 "num_base_bdevs_discovered": 1, 00:12:21.757 "num_base_bdevs_operational": 1, 00:12:21.757 "base_bdevs_list": [ 00:12:21.757 { 00:12:21.757 "name": null, 00:12:21.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:21.757 "is_configured": false, 00:12:21.757 "data_offset": 0, 00:12:21.757 "data_size": 63488 00:12:21.757 }, 00:12:21.757 { 00:12:21.757 "name": "BaseBdev2", 00:12:21.757 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:21.757 "is_configured": true, 00:12:21.757 "data_offset": 2048, 00:12:21.757 "data_size": 63488 00:12:21.757 } 00:12:21.757 ] 00:12:21.757 }' 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:21.757 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.325 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:22.325 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.325 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:22.325 [2024-11-19 10:23:35.833181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:22.325 [2024-11-19 10:23:35.833304] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.325 [2024-11-19 10:23:35.833345] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:22.325 [2024-11-19 10:23:35.833377] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.325 [2024-11-19 10:23:35.833909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.325 [2024-11-19 10:23:35.833987] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:22.325 [2024-11-19 10:23:35.834137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:22.325 [2024-11-19 10:23:35.834187] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:22.325 [2024-11-19 10:23:35.834239] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:22.325 [2024-11-19 10:23:35.834338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.325 [2024-11-19 10:23:35.849651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:22.325 spare 00:12:22.325 10:23:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.326 [2024-11-19 10:23:35.851590] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.326 10:23:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.263 "name": "raid_bdev1", 00:12:23.263 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:23.263 "strip_size_kb": 0, 00:12:23.263 "state": "online", 00:12:23.263 "raid_level": "raid1", 00:12:23.263 "superblock": true, 00:12:23.263 "num_base_bdevs": 2, 00:12:23.263 "num_base_bdevs_discovered": 2, 00:12:23.263 "num_base_bdevs_operational": 2, 00:12:23.263 "process": { 00:12:23.263 "type": "rebuild", 00:12:23.263 "target": "spare", 00:12:23.263 "progress": { 00:12:23.263 "blocks": 20480, 00:12:23.263 "percent": 32 00:12:23.263 } 00:12:23.263 }, 00:12:23.263 "base_bdevs_list": [ 00:12:23.263 { 00:12:23.263 "name": "spare", 00:12:23.263 "uuid": "57c7328e-9575-5628-9407-74d79a6ada11", 00:12:23.263 "is_configured": true, 00:12:23.263 "data_offset": 2048, 00:12:23.263 "data_size": 63488 00:12:23.263 }, 00:12:23.263 { 00:12:23.263 "name": "BaseBdev2", 00:12:23.263 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:23.263 "is_configured": true, 00:12:23.263 "data_offset": 2048, 00:12:23.263 "data_size": 63488 00:12:23.263 } 00:12:23.263 ] 00:12:23.263 }' 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.263 10:23:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.263 
10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.263 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:23.263 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.263 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.263 [2024-11-19 10:23:37.015217] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.522 [2024-11-19 10:23:37.056546] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:23.522 [2024-11-19 10:23:37.056599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.522 [2024-11-19 10:23:37.056632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:23.522 [2024-11-19 10:23:37.056639] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.522 "name": "raid_bdev1", 00:12:23.522 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:23.522 "strip_size_kb": 0, 00:12:23.522 "state": "online", 00:12:23.522 "raid_level": "raid1", 00:12:23.522 "superblock": true, 00:12:23.522 "num_base_bdevs": 2, 00:12:23.522 "num_base_bdevs_discovered": 1, 00:12:23.522 "num_base_bdevs_operational": 1, 00:12:23.522 "base_bdevs_list": [ 00:12:23.522 { 00:12:23.522 "name": null, 00:12:23.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.522 "is_configured": false, 00:12:23.522 "data_offset": 0, 00:12:23.522 "data_size": 63488 00:12:23.522 }, 00:12:23.522 { 00:12:23.522 "name": "BaseBdev2", 00:12:23.522 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:23.522 "is_configured": true, 00:12:23.522 "data_offset": 2048, 00:12:23.522 "data_size": 63488 00:12:23.522 } 00:12:23.522 ] 00:12:23.522 }' 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.522 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:23.781 10:23:37 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.781 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:24.039 "name": "raid_bdev1", 00:12:24.039 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:24.039 "strip_size_kb": 0, 00:12:24.039 "state": "online", 00:12:24.039 "raid_level": "raid1", 00:12:24.039 "superblock": true, 00:12:24.039 "num_base_bdevs": 2, 00:12:24.039 "num_base_bdevs_discovered": 1, 00:12:24.039 "num_base_bdevs_operational": 1, 00:12:24.039 "base_bdevs_list": [ 00:12:24.039 { 00:12:24.039 "name": null, 00:12:24.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.039 "is_configured": false, 00:12:24.039 "data_offset": 0, 00:12:24.039 "data_size": 63488 00:12:24.039 }, 00:12:24.039 { 00:12:24.039 "name": "BaseBdev2", 00:12:24.039 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:24.039 "is_configured": true, 00:12:24.039 "data_offset": 2048, 00:12:24.039 "data_size": 
63488 00:12:24.039 } 00:12:24.039 ] 00:12:24.039 }' 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.039 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.040 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.040 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:24.040 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.040 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.040 [2024-11-19 10:23:37.693499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:24.040 [2024-11-19 10:23:37.693554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.040 [2024-11-19 10:23:37.693592] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:24.040 [2024-11-19 10:23:37.693608] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.040 [2024-11-19 10:23:37.694089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.040 [2024-11-19 10:23:37.694125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:24.040 [2024-11-19 10:23:37.694224] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:24.040 [2024-11-19 10:23:37.694253] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:24.040 [2024-11-19 10:23:37.694302] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:24.040 [2024-11-19 10:23:37.694364] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:24.040 BaseBdev1 00:12:24.040 10:23:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.040 10:23:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.975 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.975 "name": "raid_bdev1", 00:12:24.975 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:24.975 "strip_size_kb": 0, 00:12:24.975 "state": "online", 00:12:24.975 "raid_level": "raid1", 00:12:24.976 "superblock": true, 00:12:24.976 "num_base_bdevs": 2, 00:12:24.976 "num_base_bdevs_discovered": 1, 00:12:24.976 "num_base_bdevs_operational": 1, 00:12:24.976 "base_bdevs_list": [ 00:12:24.976 { 00:12:24.976 "name": null, 00:12:24.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.976 "is_configured": false, 00:12:24.976 "data_offset": 0, 00:12:24.976 "data_size": 63488 00:12:24.976 }, 00:12:24.976 { 00:12:24.976 "name": "BaseBdev2", 00:12:24.976 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:24.976 "is_configured": true, 00:12:24.976 "data_offset": 2048, 00:12:24.976 "data_size": 63488 00:12:24.976 } 00:12:24.976 ] 00:12:24.976 }' 00:12:24.976 10:23:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.976 10:23:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.544 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.544 "name": "raid_bdev1", 00:12:25.544 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:25.544 "strip_size_kb": 0, 00:12:25.544 "state": "online", 00:12:25.544 "raid_level": "raid1", 00:12:25.544 "superblock": true, 00:12:25.544 "num_base_bdevs": 2, 00:12:25.544 "num_base_bdevs_discovered": 1, 00:12:25.544 "num_base_bdevs_operational": 1, 00:12:25.544 "base_bdevs_list": [ 00:12:25.544 { 00:12:25.544 "name": null, 00:12:25.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:25.544 "is_configured": false, 00:12:25.544 "data_offset": 0, 00:12:25.544 "data_size": 63488 00:12:25.544 }, 00:12:25.544 { 00:12:25.544 "name": "BaseBdev2", 00:12:25.544 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:25.544 "is_configured": true, 00:12:25.544 "data_offset": 2048, 00:12:25.544 "data_size": 63488 00:12:25.544 } 00:12:25.545 ] 00:12:25.545 }' 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:25.545 10:23:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:25.545 [2024-11-19 10:23:39.275228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.545 [2024-11-19 10:23:39.275395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:25.545 [2024-11-19 10:23:39.275412] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:25.545 request: 00:12:25.545 { 00:12:25.545 "base_bdev": "BaseBdev1", 00:12:25.545 "raid_bdev": "raid_bdev1", 00:12:25.545 "method": 
"bdev_raid_add_base_bdev", 00:12:25.545 "req_id": 1 00:12:25.545 } 00:12:25.545 Got JSON-RPC error response 00:12:25.545 response: 00:12:25.545 { 00:12:25.545 "code": -22, 00:12:25.545 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:25.545 } 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:25.545 10:23:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.923 10:23:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.923 "name": "raid_bdev1", 00:12:26.923 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:26.923 "strip_size_kb": 0, 00:12:26.923 "state": "online", 00:12:26.923 "raid_level": "raid1", 00:12:26.923 "superblock": true, 00:12:26.923 "num_base_bdevs": 2, 00:12:26.923 "num_base_bdevs_discovered": 1, 00:12:26.923 "num_base_bdevs_operational": 1, 00:12:26.923 "base_bdevs_list": [ 00:12:26.923 { 00:12:26.923 "name": null, 00:12:26.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.923 "is_configured": false, 00:12:26.923 "data_offset": 0, 00:12:26.923 "data_size": 63488 00:12:26.923 }, 00:12:26.923 { 00:12:26.923 "name": "BaseBdev2", 00:12:26.923 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:26.923 "is_configured": true, 00:12:26.923 "data_offset": 2048, 00:12:26.923 "data_size": 63488 00:12:26.923 } 00:12:26.923 ] 00:12:26.923 }' 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.923 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.183 "name": "raid_bdev1", 00:12:27.183 "uuid": "4f0d4811-5fde-4c24-bf70-4f55c346e138", 00:12:27.183 "strip_size_kb": 0, 00:12:27.183 "state": "online", 00:12:27.183 "raid_level": "raid1", 00:12:27.183 "superblock": true, 00:12:27.183 "num_base_bdevs": 2, 00:12:27.183 "num_base_bdevs_discovered": 1, 00:12:27.183 "num_base_bdevs_operational": 1, 00:12:27.183 "base_bdevs_list": [ 00:12:27.183 { 00:12:27.183 "name": null, 00:12:27.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.183 "is_configured": false, 00:12:27.183 "data_offset": 0, 00:12:27.183 "data_size": 63488 00:12:27.183 }, 00:12:27.183 { 00:12:27.183 "name": "BaseBdev2", 00:12:27.183 "uuid": "c19d26f1-d4ec-5da4-9c78-2d1f19b235dc", 00:12:27.183 "is_configured": true, 00:12:27.183 "data_offset": 2048, 00:12:27.183 "data_size": 63488 00:12:27.183 } 00:12:27.183 ] 00:12:27.183 }' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75474 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75474 ']' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75474 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75474 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.183 killing process with pid 75474 00:12:27.183 Received shutdown signal, test time was about 60.000000 seconds 00:12:27.183 00:12:27.183 Latency(us) 00:12:27.183 [2024-11-19T10:23:40.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:27.183 [2024-11-19T10:23:40.964Z] =================================================================================================================== 00:12:27.183 [2024-11-19T10:23:40.964Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75474' 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75474 00:12:27.183 [2024-11-19 10:23:40.869022] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.183 [2024-11-19 
10:23:40.869145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.183 [2024-11-19 10:23:40.869193] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.183 [2024-11-19 10:23:40.869204] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:27.183 10:23:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75474 00:12:27.443 [2024-11-19 10:23:41.170137] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:12:28.821 00:12:28.821 real 0m22.675s 00:12:28.821 user 0m27.872s 00:12:28.821 sys 0m3.406s 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.821 ************************************ 00:12:28.821 END TEST raid_rebuild_test_sb 00:12:28.821 ************************************ 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:28.821 10:23:42 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:28.821 10:23:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:28.821 10:23:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.821 10:23:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:28.821 ************************************ 00:12:28.821 START TEST raid_rebuild_test_io 00:12:28.821 ************************************ 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:28.821 
10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76197 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76197 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76197 ']' 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.821 10:23:42 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.821 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:28.821 Zero copy mechanism will not be used. 00:12:28.821 [2024-11-19 10:23:42.393908] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:12:28.821 [2024-11-19 10:23:42.394042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76197 ] 00:12:28.821 [2024-11-19 10:23:42.549654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.081 [2024-11-19 10:23:42.664144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.340 [2024-11-19 10:23:42.869296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.340 [2024-11-19 10:23:42.869368] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.600 BaseBdev1_malloc 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.600 [2024-11-19 10:23:43.260973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:12:29.600 [2024-11-19 10:23:43.261046] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.600 [2024-11-19 10:23:43.261070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:29.600 [2024-11-19 10:23:43.261081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.600 [2024-11-19 10:23:43.263117] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.600 [2024-11-19 10:23:43.263148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.600 BaseBdev1 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.600 BaseBdev2_malloc 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.600 [2024-11-19 10:23:43.315480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:29.600 [2024-11-19 10:23:43.315536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.600 [2024-11-19 10:23:43.315554] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:29.600 [2024-11-19 10:23:43.315565] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.600 [2024-11-19 10:23:43.317581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.600 [2024-11-19 10:23:43.317617] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.600 BaseBdev2 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.600 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.859 spare_malloc 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.859 spare_delay 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.859 [2024-11-19 10:23:43.405354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:12:29.859 [2024-11-19 10:23:43.405414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.859 [2024-11-19 10:23:43.405435] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:29.859 [2024-11-19 10:23:43.405447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.859 [2024-11-19 10:23:43.407713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.859 [2024-11-19 10:23:43.407752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:29.859 spare 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.859 [2024-11-19 10:23:43.417391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:29.859 [2024-11-19 10:23:43.419206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:29.859 [2024-11-19 10:23:43.419302] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:29.859 [2024-11-19 10:23:43.419317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:29.859 [2024-11-19 10:23:43.419554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:29.859 [2024-11-19 10:23:43.419710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:29.859 [2024-11-19 10:23:43.419728] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:12:29.859 [2024-11-19 10:23:43.419876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.859 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.860 
"name": "raid_bdev1", 00:12:29.860 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:29.860 "strip_size_kb": 0, 00:12:29.860 "state": "online", 00:12:29.860 "raid_level": "raid1", 00:12:29.860 "superblock": false, 00:12:29.860 "num_base_bdevs": 2, 00:12:29.860 "num_base_bdevs_discovered": 2, 00:12:29.860 "num_base_bdevs_operational": 2, 00:12:29.860 "base_bdevs_list": [ 00:12:29.860 { 00:12:29.860 "name": "BaseBdev1", 00:12:29.860 "uuid": "2a80862d-c372-5d94-a175-5076bb466f7a", 00:12:29.860 "is_configured": true, 00:12:29.860 "data_offset": 0, 00:12:29.860 "data_size": 65536 00:12:29.860 }, 00:12:29.860 { 00:12:29.860 "name": "BaseBdev2", 00:12:29.860 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:29.860 "is_configured": true, 00:12:29.860 "data_offset": 0, 00:12:29.860 "data_size": 65536 00:12:29.860 } 00:12:29.860 ] 00:12:29.860 }' 00:12:29.860 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.860 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.119 [2024-11-19 10:23:43.812969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:30.119 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.120 [2024-11-19 10:23:43.876600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:30.120 10:23:43 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.120 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.379 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.379 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.379 "name": "raid_bdev1", 00:12:30.379 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:30.379 "strip_size_kb": 0, 00:12:30.379 "state": "online", 00:12:30.379 "raid_level": "raid1", 00:12:30.379 "superblock": false, 00:12:30.379 "num_base_bdevs": 2, 00:12:30.379 "num_base_bdevs_discovered": 1, 00:12:30.379 "num_base_bdevs_operational": 1, 00:12:30.379 "base_bdevs_list": [ 00:12:30.379 { 00:12:30.379 "name": null, 00:12:30.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.379 "is_configured": false, 00:12:30.379 "data_offset": 0, 00:12:30.379 "data_size": 65536 00:12:30.379 }, 00:12:30.379 { 00:12:30.379 "name": "BaseBdev2", 00:12:30.379 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:30.379 "is_configured": true, 00:12:30.379 "data_offset": 0, 00:12:30.379 "data_size": 65536 00:12:30.379 } 00:12:30.379 ] 00:12:30.379 }' 00:12:30.379 10:23:43 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:12:30.379 10:23:43 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.379 [2024-11-19 10:23:43.964631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:30.379 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:30.379 Zero copy mechanism will not be used. 00:12:30.379 Running I/O for 60 seconds... 00:12:30.638 10:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.638 10:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.638 10:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:30.638 [2024-11-19 10:23:44.311018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.638 10:23:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.638 10:23:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:30.638 [2024-11-19 10:23:44.365321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:30.638 [2024-11-19 10:23:44.367225] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:30.897 [2024-11-19 10:23:44.475223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:30.897 [2024-11-19 10:23:44.475745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:31.157 [2024-11-19 10:23:44.707360] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:31.157 [2024-11-19 10:23:44.707627] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:31.417 169.00 IOPS, 507.00 MiB/s 
[2024-11-19T10:23:45.198Z] [2024-11-19 10:23:45.183318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:31.417 [2024-11-19 10:23:45.183682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.676 "name": "raid_bdev1", 00:12:31.676 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:31.676 "strip_size_kb": 0, 00:12:31.676 "state": "online", 00:12:31.676 "raid_level": "raid1", 00:12:31.676 "superblock": false, 00:12:31.676 "num_base_bdevs": 2, 00:12:31.676 "num_base_bdevs_discovered": 2, 00:12:31.676 "num_base_bdevs_operational": 2, 00:12:31.676 "process": { 00:12:31.676 "type": "rebuild", 00:12:31.676 "target": "spare", 
00:12:31.676 "progress": { 00:12:31.676 "blocks": 10240, 00:12:31.676 "percent": 15 00:12:31.676 } 00:12:31.676 }, 00:12:31.676 "base_bdevs_list": [ 00:12:31.676 { 00:12:31.676 "name": "spare", 00:12:31.676 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:31.676 "is_configured": true, 00:12:31.676 "data_offset": 0, 00:12:31.676 "data_size": 65536 00:12:31.676 }, 00:12:31.676 { 00:12:31.676 "name": "BaseBdev2", 00:12:31.676 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:31.676 "is_configured": true, 00:12:31.676 "data_offset": 0, 00:12:31.676 "data_size": 65536 00:12:31.676 } 00:12:31.676 ] 00:12:31.676 }' 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.676 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:31.936 [2024-11-19 10:23:45.509182] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.936 [2024-11-19 10:23:45.623354] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:31.936 [2024-11-19 10:23:45.631880] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.936 [2024-11-19 10:23:45.631919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.936 [2024-11-19 10:23:45.631934] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:12:31.936 [2024-11-19 10:23:45.675147] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.936 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.195 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.195 10:23:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:32.195 "name": "raid_bdev1", 00:12:32.195 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:32.195 "strip_size_kb": 0, 00:12:32.195 "state": "online", 00:12:32.195 "raid_level": "raid1", 00:12:32.195 "superblock": false, 00:12:32.195 "num_base_bdevs": 2, 00:12:32.195 "num_base_bdevs_discovered": 1, 00:12:32.195 "num_base_bdevs_operational": 1, 00:12:32.195 "base_bdevs_list": [ 00:12:32.195 { 00:12:32.195 "name": null, 00:12:32.195 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.195 "is_configured": false, 00:12:32.195 "data_offset": 0, 00:12:32.195 "data_size": 65536 00:12:32.195 }, 00:12:32.195 { 00:12:32.195 "name": "BaseBdev2", 00:12:32.195 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:32.195 "is_configured": true, 00:12:32.195 "data_offset": 0, 00:12:32.195 "data_size": 65536 00:12:32.195 } 00:12:32.195 ] 00:12:32.195 }' 00:12:32.195 10:23:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:32.195 10:23:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.454 160.00 IOPS, 480.00 MiB/s [2024-11-19T10:23:46.235Z] 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.454 "name": "raid_bdev1", 00:12:32.454 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:32.454 "strip_size_kb": 0, 00:12:32.454 "state": "online", 00:12:32.454 "raid_level": "raid1", 00:12:32.454 "superblock": false, 00:12:32.454 "num_base_bdevs": 2, 00:12:32.454 "num_base_bdevs_discovered": 1, 00:12:32.454 "num_base_bdevs_operational": 1, 00:12:32.454 "base_bdevs_list": [ 00:12:32.454 { 00:12:32.454 "name": null, 00:12:32.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.454 "is_configured": false, 00:12:32.454 "data_offset": 0, 00:12:32.454 "data_size": 65536 00:12:32.454 }, 00:12:32.454 { 00:12:32.454 "name": "BaseBdev2", 00:12:32.454 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:32.454 "is_configured": true, 00:12:32.454 "data_offset": 0, 00:12:32.454 "data_size": 65536 00:12:32.454 } 00:12:32.454 ] 00:12:32.454 }' 00:12:32.454 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:32.714 [2024-11-19 10:23:46.271970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.714 10:23:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:32.714 [2024-11-19 10:23:46.335548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:32.714 [2024-11-19 10:23:46.337366] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.714 [2024-11-19 10:23:46.449777] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:32.714 [2024-11-19 10:23:46.450269] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:32.973 [2024-11-19 10:23:46.658771] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:32.973 [2024-11-19 10:23:46.658985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:33.233 163.33 IOPS, 490.00 MiB/s [2024-11-19T10:23:47.014Z] [2024-11-19 10:23:46.977114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:33.492 [2024-11-19 10:23:47.211352] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:33.493 [2024-11-19 10:23:47.216840] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.752 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.752 "name": "raid_bdev1", 00:12:33.752 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:33.752 "strip_size_kb": 0, 00:12:33.752 "state": "online", 00:12:33.752 "raid_level": "raid1", 00:12:33.752 "superblock": false, 00:12:33.752 "num_base_bdevs": 2, 00:12:33.752 "num_base_bdevs_discovered": 2, 00:12:33.752 "num_base_bdevs_operational": 2, 00:12:33.752 "process": { 00:12:33.752 "type": "rebuild", 00:12:33.752 "target": "spare", 00:12:33.752 "progress": { 00:12:33.752 "blocks": 10240, 00:12:33.752 "percent": 15 00:12:33.752 } 00:12:33.752 }, 00:12:33.752 "base_bdevs_list": [ 00:12:33.753 { 00:12:33.753 "name": "spare", 00:12:33.753 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:33.753 "is_configured": true, 00:12:33.753 "data_offset": 0, 00:12:33.753 "data_size": 65536 00:12:33.753 }, 00:12:33.753 { 00:12:33.753 "name": "BaseBdev2", 00:12:33.753 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:33.753 "is_configured": true, 00:12:33.753 
"data_offset": 0, 00:12:33.753 "data_size": 65536 00:12:33.753 } 00:12:33.753 ] 00:12:33.753 }' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=392 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.753 "name": "raid_bdev1", 00:12:33.753 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:33.753 "strip_size_kb": 0, 00:12:33.753 "state": "online", 00:12:33.753 "raid_level": "raid1", 00:12:33.753 "superblock": false, 00:12:33.753 "num_base_bdevs": 2, 00:12:33.753 "num_base_bdevs_discovered": 2, 00:12:33.753 "num_base_bdevs_operational": 2, 00:12:33.753 "process": { 00:12:33.753 "type": "rebuild", 00:12:33.753 "target": "spare", 00:12:33.753 "progress": { 00:12:33.753 "blocks": 12288, 00:12:33.753 "percent": 18 00:12:33.753 } 00:12:33.753 }, 00:12:33.753 "base_bdevs_list": [ 00:12:33.753 { 00:12:33.753 "name": "spare", 00:12:33.753 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:33.753 "is_configured": true, 00:12:33.753 "data_offset": 0, 00:12:33.753 "data_size": 65536 00:12:33.753 }, 00:12:33.753 { 00:12:33.753 "name": "BaseBdev2", 00:12:33.753 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:33.753 "is_configured": true, 00:12:33.753 "data_offset": 0, 00:12:33.753 "data_size": 65536 00:12:33.753 } 00:12:33.753 ] 00:12:33.753 }' 00:12:33.753 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.013 [2024-11-19 10:23:47.533200] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.013 [2024-11-19 10:23:47.533718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:34.013 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.013 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.013 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.013 10:23:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.013 [2024-11-19 10:23:47.750570] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:34.531 144.75 IOPS, 434.25 MiB/s [2024-11-19T10:23:48.312Z] [2024-11-19 10:23:48.104022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:34.810 [2024-11-19 10:23:48.334816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.069 "name": "raid_bdev1", 00:12:35.069 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:35.069 "strip_size_kb": 0, 00:12:35.069 "state": "online", 00:12:35.069 "raid_level": "raid1", 00:12:35.069 "superblock": false, 00:12:35.069 "num_base_bdevs": 2, 00:12:35.069 "num_base_bdevs_discovered": 2, 00:12:35.069 "num_base_bdevs_operational": 2, 00:12:35.069 "process": { 00:12:35.069 "type": "rebuild", 00:12:35.069 "target": "spare", 00:12:35.069 "progress": { 00:12:35.069 "blocks": 28672, 00:12:35.069 "percent": 43 00:12:35.069 } 00:12:35.069 }, 00:12:35.069 "base_bdevs_list": [ 00:12:35.069 { 00:12:35.069 "name": "spare", 00:12:35.069 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:35.069 "is_configured": true, 00:12:35.069 "data_offset": 0, 00:12:35.069 "data_size": 65536 00:12:35.069 }, 00:12:35.069 { 00:12:35.069 "name": "BaseBdev2", 00:12:35.069 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:35.069 "is_configured": true, 00:12:35.069 "data_offset": 0, 00:12:35.069 "data_size": 65536 00:12:35.069 } 00:12:35.069 ] 00:12:35.069 }' 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:35.069 10:23:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.069 [2024-11-19 10:23:48.826264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 
offset_begin: 30720 offset_end: 36864 00:12:35.897 126.00 IOPS, 378.00 MiB/s [2024-11-19T10:23:49.678Z] [2024-11-19 10:23:49.404525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.157 "name": "raid_bdev1", 00:12:36.157 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:36.157 "strip_size_kb": 0, 00:12:36.157 "state": "online", 00:12:36.157 "raid_level": "raid1", 00:12:36.157 "superblock": false, 00:12:36.157 "num_base_bdevs": 2, 00:12:36.157 "num_base_bdevs_discovered": 2, 00:12:36.157 "num_base_bdevs_operational": 2, 00:12:36.157 "process": { 00:12:36.157 "type": "rebuild", 
00:12:36.157 "target": "spare", 00:12:36.157 "progress": { 00:12:36.157 "blocks": 45056, 00:12:36.157 "percent": 68 00:12:36.157 } 00:12:36.157 }, 00:12:36.157 "base_bdevs_list": [ 00:12:36.157 { 00:12:36.157 "name": "spare", 00:12:36.157 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:36.157 "is_configured": true, 00:12:36.157 "data_offset": 0, 00:12:36.157 "data_size": 65536 00:12:36.157 }, 00:12:36.157 { 00:12:36.157 "name": "BaseBdev2", 00:12:36.157 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:36.157 "is_configured": true, 00:12:36.157 "data_offset": 0, 00:12:36.157 "data_size": 65536 00:12:36.157 } 00:12:36.157 ] 00:12:36.157 }' 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.157 [2024-11-19 10:23:49.828548] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.157 10:23:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:36.417 111.67 IOPS, 335.00 MiB/s [2024-11-19T10:23:50.198Z] [2024-11-19 10:23:50.156574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:36.986 [2024-11-19 10:23:50.477657] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:36.986 [2024-11-19 10:23:50.585634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( 
SECONDS < timeout )) 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.246 [2024-11-19 10:23:50.913180] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.246 "name": "raid_bdev1", 00:12:37.246 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:37.246 "strip_size_kb": 0, 00:12:37.246 "state": "online", 00:12:37.246 "raid_level": "raid1", 00:12:37.246 "superblock": false, 00:12:37.246 "num_base_bdevs": 2, 00:12:37.246 "num_base_bdevs_discovered": 2, 00:12:37.246 "num_base_bdevs_operational": 2, 00:12:37.246 "process": { 00:12:37.246 "type": "rebuild", 00:12:37.246 "target": "spare", 00:12:37.246 "progress": { 00:12:37.246 "blocks": 63488, 00:12:37.246 "percent": 96 00:12:37.246 } 00:12:37.246 }, 00:12:37.246 "base_bdevs_list": [ 00:12:37.246 { 00:12:37.246 "name": 
"spare", 00:12:37.246 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:37.246 "is_configured": true, 00:12:37.246 "data_offset": 0, 00:12:37.246 "data_size": 65536 00:12:37.246 }, 00:12:37.246 { 00:12:37.246 "name": "BaseBdev2", 00:12:37.246 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:37.246 "is_configured": true, 00:12:37.246 "data_offset": 0, 00:12:37.246 "data_size": 65536 00:12:37.246 } 00:12:37.246 ] 00:12:37.246 }' 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.246 101.14 IOPS, 303.43 MiB/s [2024-11-19T10:23:51.027Z] 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.246 10:23:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.247 [2024-11-19 10:23:51.012964] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:37.247 [2024-11-19 10:23:51.014706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.247 10:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.247 10:23:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.445 91.88 IOPS, 275.62 MiB/s [2024-11-19T10:23:52.226Z] 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.445 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.446 "name": "raid_bdev1", 00:12:38.446 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:38.446 "strip_size_kb": 0, 00:12:38.446 "state": "online", 00:12:38.446 "raid_level": "raid1", 00:12:38.446 "superblock": false, 00:12:38.446 "num_base_bdevs": 2, 00:12:38.446 "num_base_bdevs_discovered": 2, 00:12:38.446 "num_base_bdevs_operational": 2, 00:12:38.446 "base_bdevs_list": [ 00:12:38.446 { 00:12:38.446 "name": "spare", 00:12:38.446 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:38.446 "is_configured": true, 00:12:38.446 "data_offset": 0, 00:12:38.446 "data_size": 65536 00:12:38.446 }, 00:12:38.446 { 00:12:38.446 "name": "BaseBdev2", 00:12:38.446 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:38.446 "is_configured": true, 00:12:38.446 "data_offset": 0, 00:12:38.446 "data_size": 65536 00:12:38.446 } 00:12:38.446 ] 00:12:38.446 }' 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:38.446 
10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.446 "name": "raid_bdev1", 00:12:38.446 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:38.446 "strip_size_kb": 0, 00:12:38.446 "state": "online", 00:12:38.446 "raid_level": "raid1", 00:12:38.446 "superblock": false, 00:12:38.446 "num_base_bdevs": 2, 00:12:38.446 "num_base_bdevs_discovered": 2, 00:12:38.446 "num_base_bdevs_operational": 2, 00:12:38.446 "base_bdevs_list": [ 00:12:38.446 { 00:12:38.446 "name": "spare", 00:12:38.446 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:38.446 "is_configured": true, 00:12:38.446 "data_offset": 0, 00:12:38.446 "data_size": 65536 00:12:38.446 }, 00:12:38.446 { 00:12:38.446 "name": "BaseBdev2", 00:12:38.446 "uuid": 
"c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:38.446 "is_configured": true, 00:12:38.446 "data_offset": 0, 00:12:38.446 "data_size": 65536 00:12:38.446 } 00:12:38.446 ] 00:12:38.446 }' 00:12:38.446 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.706 10:23:52 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.706 "name": "raid_bdev1", 00:12:38.706 "uuid": "982822a9-3321-40a6-84fa-c516b24c79d3", 00:12:38.706 "strip_size_kb": 0, 00:12:38.706 "state": "online", 00:12:38.706 "raid_level": "raid1", 00:12:38.706 "superblock": false, 00:12:38.706 "num_base_bdevs": 2, 00:12:38.706 "num_base_bdevs_discovered": 2, 00:12:38.706 "num_base_bdevs_operational": 2, 00:12:38.706 "base_bdevs_list": [ 00:12:38.706 { 00:12:38.706 "name": "spare", 00:12:38.706 "uuid": "a493b9ec-4066-58ae-9e69-f4f3ac04a223", 00:12:38.706 "is_configured": true, 00:12:38.706 "data_offset": 0, 00:12:38.706 "data_size": 65536 00:12:38.706 }, 00:12:38.706 { 00:12:38.706 "name": "BaseBdev2", 00:12:38.706 "uuid": "c05e759d-a9c3-55eb-8dcf-53b1870a47dd", 00:12:38.706 "is_configured": true, 00:12:38.706 "data_offset": 0, 00:12:38.706 "data_size": 65536 00:12:38.706 } 00:12:38.706 ] 00:12:38.706 }' 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.706 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.965 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:38.965 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.965 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.965 [2024-11-19 10:23:52.692970] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:38.966 [2024-11-19 10:23:52.693010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing 
from online to offline 00:12:39.225 00:12:39.225 Latency(us) 00:12:39.225 [2024-11-19T10:23:53.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:39.225 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:39.225 raid_bdev1 : 8.80 87.01 261.04 0.00 0.00 15951.49 313.01 113557.58 00:12:39.225 [2024-11-19T10:23:53.006Z] =================================================================================================================== 00:12:39.225 [2024-11-19T10:23:53.006Z] Total : 87.01 261.04 0.00 0.00 15951.49 313.01 113557.58 00:12:39.225 [2024-11-19 10:23:52.773153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.225 [2024-11-19 10:23:52.773193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.225 [2024-11-19 10:23:52.773286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.225 [2024-11-19 10:23:52.773307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:39.225 { 00:12:39.225 "results": [ 00:12:39.225 { 00:12:39.225 "job": "raid_bdev1", 00:12:39.225 "core_mask": "0x1", 00:12:39.225 "workload": "randrw", 00:12:39.225 "percentage": 50, 00:12:39.225 "status": "finished", 00:12:39.225 "queue_depth": 2, 00:12:39.225 "io_size": 3145728, 00:12:39.225 "runtime": 8.803138, 00:12:39.225 "iops": 87.01442599218596, 00:12:39.225 "mibps": 261.0432779765579, 00:12:39.225 "io_failed": 0, 00:12:39.225 "io_timeout": 0, 00:12:39.225 "avg_latency_us": 15951.487347646138, 00:12:39.225 "min_latency_us": 313.0131004366812, 00:12:39.225 "max_latency_us": 113557.57554585153 00:12:39.225 } 00:12:39.225 ], 00:12:39.225 "core_count": 1 00:12:39.225 } 00:12:39.225 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.225 10:23:52 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.225 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.226 10:23:52 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:39.486 /dev/nbd0 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename 
/dev/nbd0 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.486 1+0 records in 00:12:39.486 1+0 records out 00:12:39.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283058 s, 14.5 MB/s 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.486 10:23:53 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.486 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:39.746 /dev/nbd1 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:39.746 10:23:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:39.746 1+0 records in 00:12:39.746 1+0 records out 00:12:39.746 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000377216 s, 10.9 MB/s 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:39.746 
10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:39.746 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:40.006 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76197 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76197 ']' 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76197 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76197 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.266 10:23:53 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76197' 00:12:40.266 killing process with pid 76197 00:12:40.266 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76197 00:12:40.266 Received shutdown signal, test time was about 9.980900 seconds 00:12:40.266 00:12:40.266 Latency(us) 00:12:40.266 [2024-11-19T10:23:54.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.266 [2024-11-19T10:23:54.047Z] =================================================================================================================== 00:12:40.266 [2024-11-19T10:23:54.047Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.267 [2024-11-19 10:23:53.928316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:40.267 10:23:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76197 00:12:40.526 [2024-11-19 10:23:54.150289] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:41.914 00:12:41.914 real 0m12.965s 00:12:41.914 user 0m16.061s 00:12:41.914 sys 0m1.411s 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.914 ************************************ 00:12:41.914 END TEST raid_rebuild_test_io 00:12:41.914 ************************************ 00:12:41.914 10:23:55 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:41.914 10:23:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:41.914 10:23:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:41.914 10:23:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:41.914 ************************************ 
00:12:41.914 START TEST raid_rebuild_test_sb_io 00:12:41.914 ************************************ 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:41.914 10:23:55 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76588 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76588 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76588 ']' 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:41.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:41.914 10:23:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.914 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:41.914 Zero copy mechanism will not be used. 00:12:41.914 [2024-11-19 10:23:55.429389] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:12:41.914 [2024-11-19 10:23:55.429501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76588 ] 00:12:41.914 [2024-11-19 10:23:55.602928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.174 [2024-11-19 10:23:55.715188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.174 [2024-11-19 10:23:55.910130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.174 [2024-11-19 10:23:55.910190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.743 BaseBdev1_malloc 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.743 [2024-11-19 10:23:56.285203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:42.743 [2024-11-19 10:23:56.285287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.743 [2024-11-19 10:23:56.285311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:42.743 [2024-11-19 10:23:56.285323] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.743 [2024-11-19 10:23:56.287373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.743 [2024-11-19 10:23:56.287415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:42.743 BaseBdev1 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.743 BaseBdev2_malloc 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.743 [2024-11-19 10:23:56.338760] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:42.743 [2024-11-19 10:23:56.338816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.743 [2024-11-19 10:23:56.338850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:42.743 [2024-11-19 10:23:56.338864] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.743 [2024-11-19 10:23:56.341050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.743 [2024-11-19 10:23:56.341085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:42.743 BaseBdev2 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.743 spare_malloc 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.743 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 spare_delay 
00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 [2024-11-19 10:23:56.417559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.744 [2024-11-19 10:23:56.417616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.744 [2024-11-19 10:23:56.417635] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:42.744 [2024-11-19 10:23:56.417647] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.744 [2024-11-19 10:23:56.419701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.744 [2024-11-19 10:23:56.419742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.744 spare 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 [2024-11-19 10:23:56.429593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:42.744 [2024-11-19 10:23:56.431302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.744 [2024-11-19 10:23:56.431460] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:42.744 [2024-11-19 10:23:56.431483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.744 [2024-11-19 10:23:56.431728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:42.744 [2024-11-19 10:23:56.431895] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:42.744 [2024-11-19 10:23:56.431908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:42.744 [2024-11-19 10:23:56.432071] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.744 10:23:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.744 "name": "raid_bdev1", 00:12:42.744 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:42.744 "strip_size_kb": 0, 00:12:42.744 "state": "online", 00:12:42.744 "raid_level": "raid1", 00:12:42.744 "superblock": true, 00:12:42.744 "num_base_bdevs": 2, 00:12:42.744 "num_base_bdevs_discovered": 2, 00:12:42.744 "num_base_bdevs_operational": 2, 00:12:42.744 "base_bdevs_list": [ 00:12:42.744 { 00:12:42.744 "name": "BaseBdev1", 00:12:42.744 "uuid": "6fa67363-b67c-5d22-a041-3d707f98d412", 00:12:42.744 "is_configured": true, 00:12:42.744 "data_offset": 2048, 00:12:42.744 "data_size": 63488 00:12:42.744 }, 00:12:42.744 { 00:12:42.744 "name": "BaseBdev2", 00:12:42.744 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:42.744 "is_configured": true, 00:12:42.744 "data_offset": 2048, 00:12:42.744 "data_size": 63488 00:12:42.744 } 00:12:42.744 ] 00:12:42.744 }' 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.744 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.312 10:23:56 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:43.312 [2024-11-19 10:23:56.869114] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.312 [2024-11-19 10:23:56.960635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.312 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.312 "name": "raid_bdev1", 00:12:43.312 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:43.312 "strip_size_kb": 0, 00:12:43.312 "state": "online", 00:12:43.312 
"raid_level": "raid1", 00:12:43.312 "superblock": true, 00:12:43.312 "num_base_bdevs": 2, 00:12:43.312 "num_base_bdevs_discovered": 1, 00:12:43.312 "num_base_bdevs_operational": 1, 00:12:43.312 "base_bdevs_list": [ 00:12:43.312 { 00:12:43.312 "name": null, 00:12:43.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.312 "is_configured": false, 00:12:43.312 "data_offset": 0, 00:12:43.312 "data_size": 63488 00:12:43.312 }, 00:12:43.312 { 00:12:43.312 "name": "BaseBdev2", 00:12:43.312 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:43.313 "is_configured": true, 00:12:43.313 "data_offset": 2048, 00:12:43.313 "data_size": 63488 00:12:43.313 } 00:12:43.313 ] 00:12:43.313 }' 00:12:43.313 10:23:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.313 10:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.313 [2024-11-19 10:23:57.064654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:43.313 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:43.313 Zero copy mechanism will not be used. 00:12:43.313 Running I/O for 60 seconds... 
00:12:43.572 10:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.572 10:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.572 10:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.572 [2024-11-19 10:23:57.328486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.572 10:23:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.572 10:23:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:43.832 [2024-11-19 10:23:57.366968] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:43.832 [2024-11-19 10:23:57.368840] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:43.832 [2024-11-19 10:23:57.482976] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:43.832 [2024-11-19 10:23:57.483468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:44.093 [2024-11-19 10:23:57.692722] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.093 [2024-11-19 10:23:57.692963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:44.614 145.00 IOPS, 435.00 MiB/s [2024-11-19T10:23:58.395Z] [2024-11-19 10:23:58.150327] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- 
# local raid_bdev_name=raid_bdev1 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.614 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.874 "name": "raid_bdev1", 00:12:44.874 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:44.874 "strip_size_kb": 0, 00:12:44.874 "state": "online", 00:12:44.874 "raid_level": "raid1", 00:12:44.874 "superblock": true, 00:12:44.874 "num_base_bdevs": 2, 00:12:44.874 "num_base_bdevs_discovered": 2, 00:12:44.874 "num_base_bdevs_operational": 2, 00:12:44.874 "process": { 00:12:44.874 "type": "rebuild", 00:12:44.874 "target": "spare", 00:12:44.874 "progress": { 00:12:44.874 "blocks": 12288, 00:12:44.874 "percent": 19 00:12:44.874 } 00:12:44.874 }, 00:12:44.874 "base_bdevs_list": [ 00:12:44.874 { 00:12:44.874 "name": "spare", 00:12:44.874 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:44.874 "is_configured": true, 00:12:44.874 "data_offset": 2048, 00:12:44.874 "data_size": 63488 00:12:44.874 }, 00:12:44.874 { 00:12:44.874 "name": "BaseBdev2", 00:12:44.874 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:44.874 "is_configured": true, 
00:12:44.874 "data_offset": 2048, 00:12:44.874 "data_size": 63488 00:12:44.874 } 00:12:44.874 ] 00:12:44.874 }' 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.874 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.874 [2024-11-19 10:23:58.496017] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.874 [2024-11-19 10:23:58.496076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:44.874 [2024-11-19 10:23:58.496366] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:44.874 [2024-11-19 10:23:58.597825] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.874 [2024-11-19 10:23:58.610362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.874 [2024-11-19 10:23:58.610408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.874 [2024-11-19 10:23:58.610436] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:45.134 [2024-11-19 10:23:58.655576] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.134 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.134 "name": "raid_bdev1", 00:12:45.134 "uuid": 
"6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:45.134 "strip_size_kb": 0, 00:12:45.135 "state": "online", 00:12:45.135 "raid_level": "raid1", 00:12:45.135 "superblock": true, 00:12:45.135 "num_base_bdevs": 2, 00:12:45.135 "num_base_bdevs_discovered": 1, 00:12:45.135 "num_base_bdevs_operational": 1, 00:12:45.135 "base_bdevs_list": [ 00:12:45.135 { 00:12:45.135 "name": null, 00:12:45.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.135 "is_configured": false, 00:12:45.135 "data_offset": 0, 00:12:45.135 "data_size": 63488 00:12:45.135 }, 00:12:45.135 { 00:12:45.135 "name": "BaseBdev2", 00:12:45.135 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:45.135 "is_configured": true, 00:12:45.135 "data_offset": 2048, 00:12:45.135 "data_size": 63488 00:12:45.135 } 00:12:45.135 ] 00:12:45.135 }' 00:12:45.135 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.135 10:23:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.395 167.50 IOPS, 502.50 MiB/s [2024-11-19T10:23:59.176Z] 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:45.395 "name": "raid_bdev1", 00:12:45.395 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:45.395 "strip_size_kb": 0, 00:12:45.395 "state": "online", 00:12:45.395 "raid_level": "raid1", 00:12:45.395 "superblock": true, 00:12:45.395 "num_base_bdevs": 2, 00:12:45.395 "num_base_bdevs_discovered": 1, 00:12:45.395 "num_base_bdevs_operational": 1, 00:12:45.395 "base_bdevs_list": [ 00:12:45.395 { 00:12:45.395 "name": null, 00:12:45.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.395 "is_configured": false, 00:12:45.395 "data_offset": 0, 00:12:45.395 "data_size": 63488 00:12:45.395 }, 00:12:45.395 { 00:12:45.395 "name": "BaseBdev2", 00:12:45.395 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:45.395 "is_configured": true, 00:12:45.395 "data_offset": 2048, 00:12:45.395 "data_size": 63488 00:12:45.395 } 00:12:45.395 ] 00:12:45.395 }' 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:45.395 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:45.656 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:45.656 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.656 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.656 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.656 [2024-11-19 
10:23:59.205373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.656 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.656 10:23:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:45.656 [2024-11-19 10:23:59.264976] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:45.656 [2024-11-19 10:23:59.266821] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.656 [2024-11-19 10:23:59.379454] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:45.656 [2024-11-19 10:23:59.379887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:45.917 [2024-11-19 10:23:59.499986] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:45.917 [2024-11-19 10:23:59.500308] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:46.177 [2024-11-19 10:23:59.729618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:46.177 [2024-11-19 10:23:59.859124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:46.696 172.33 IOPS, 517.00 MiB/s [2024-11-19T10:24:00.477Z] 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.696 "name": "raid_bdev1", 00:12:46.696 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:46.696 "strip_size_kb": 0, 00:12:46.696 "state": "online", 00:12:46.696 "raid_level": "raid1", 00:12:46.696 "superblock": true, 00:12:46.696 "num_base_bdevs": 2, 00:12:46.696 "num_base_bdevs_discovered": 2, 00:12:46.696 "num_base_bdevs_operational": 2, 00:12:46.696 "process": { 00:12:46.696 "type": "rebuild", 00:12:46.696 "target": "spare", 00:12:46.696 "progress": { 00:12:46.696 "blocks": 14336, 00:12:46.696 "percent": 22 00:12:46.696 } 00:12:46.696 }, 00:12:46.696 "base_bdevs_list": [ 00:12:46.696 { 00:12:46.696 "name": "spare", 00:12:46.696 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:46.696 "is_configured": true, 00:12:46.696 "data_offset": 2048, 00:12:46.696 "data_size": 63488 00:12:46.696 }, 00:12:46.696 { 00:12:46.696 "name": "BaseBdev2", 00:12:46.696 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:46.696 "is_configured": true, 00:12:46.696 "data_offset": 2048, 00:12:46.696 "data_size": 63488 00:12:46.696 } 00:12:46.696 ] 00:12:46.696 }' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:46.696 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=405 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.696 "name": "raid_bdev1", 00:12:46.696 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:46.696 "strip_size_kb": 0, 00:12:46.696 "state": "online", 00:12:46.696 "raid_level": "raid1", 00:12:46.696 "superblock": true, 00:12:46.696 "num_base_bdevs": 2, 00:12:46.696 "num_base_bdevs_discovered": 2, 00:12:46.696 "num_base_bdevs_operational": 2, 00:12:46.696 "process": { 00:12:46.696 "type": "rebuild", 00:12:46.696 "target": "spare", 00:12:46.696 "progress": { 00:12:46.696 "blocks": 16384, 00:12:46.696 "percent": 25 00:12:46.696 } 00:12:46.696 }, 00:12:46.696 "base_bdevs_list": [ 00:12:46.696 { 00:12:46.696 "name": "spare", 00:12:46.696 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:46.696 "is_configured": true, 00:12:46.696 "data_offset": 2048, 00:12:46.696 "data_size": 63488 00:12:46.696 }, 00:12:46.696 { 00:12:46.696 "name": "BaseBdev2", 00:12:46.696 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:46.696 "is_configured": true, 00:12:46.696 "data_offset": 2048, 00:12:46.696 "data_size": 63488 00:12:46.696 } 00:12:46.696 ] 00:12:46.696 }' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.696 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.956 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
[[ spare == \s\p\a\r\e ]] 00:12:46.956 10:24:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:46.956 [2024-11-19 10:24:00.544514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:46.956 [2024-11-19 10:24:00.545081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:47.223 [2024-11-19 10:24:00.771919] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:47.493 145.00 IOPS, 435.00 MiB/s [2024-11-19T10:24:01.274Z] [2024-11-19 10:24:01.087759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:47.493 [2024-11-19 10:24:01.088333] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:47.753 [2024-11-19 10:24:01.308514] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.753 10:24:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.753 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.012 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.012 "name": "raid_bdev1", 00:12:48.012 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:48.012 "strip_size_kb": 0, 00:12:48.012 "state": "online", 00:12:48.012 "raid_level": "raid1", 00:12:48.012 "superblock": true, 00:12:48.012 "num_base_bdevs": 2, 00:12:48.013 "num_base_bdevs_discovered": 2, 00:12:48.013 "num_base_bdevs_operational": 2, 00:12:48.013 "process": { 00:12:48.013 "type": "rebuild", 00:12:48.013 "target": "spare", 00:12:48.013 "progress": { 00:12:48.013 "blocks": 28672, 00:12:48.013 "percent": 45 00:12:48.013 } 00:12:48.013 }, 00:12:48.013 "base_bdevs_list": [ 00:12:48.013 { 00:12:48.013 "name": "spare", 00:12:48.013 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:48.013 "is_configured": true, 00:12:48.013 "data_offset": 2048, 00:12:48.013 "data_size": 63488 00:12:48.013 }, 00:12:48.013 { 00:12:48.013 "name": "BaseBdev2", 00:12:48.013 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:48.013 "is_configured": true, 00:12:48.013 "data_offset": 2048, 00:12:48.013 "data_size": 63488 00:12:48.013 } 00:12:48.013 ] 00:12:48.013 }' 00:12:48.013 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.013 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.013 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.013 10:24:01 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.013 10:24:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:48.582 [2024-11-19 10:24:02.070484] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:12:48.582 125.60 IOPS, 376.80 MiB/s [2024-11-19T10:24:02.363Z] [2024-11-19 10:24:02.290284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.151 "name": "raid_bdev1", 00:12:49.151 "uuid": 
"6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:49.151 "strip_size_kb": 0, 00:12:49.151 "state": "online", 00:12:49.151 "raid_level": "raid1", 00:12:49.151 "superblock": true, 00:12:49.151 "num_base_bdevs": 2, 00:12:49.151 "num_base_bdevs_discovered": 2, 00:12:49.151 "num_base_bdevs_operational": 2, 00:12:49.151 "process": { 00:12:49.151 "type": "rebuild", 00:12:49.151 "target": "spare", 00:12:49.151 "progress": { 00:12:49.151 "blocks": 49152, 00:12:49.151 "percent": 77 00:12:49.151 } 00:12:49.151 }, 00:12:49.151 "base_bdevs_list": [ 00:12:49.151 { 00:12:49.151 "name": "spare", 00:12:49.151 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:49.151 "is_configured": true, 00:12:49.151 "data_offset": 2048, 00:12:49.151 "data_size": 63488 00:12:49.151 }, 00:12:49.151 { 00:12:49.151 "name": "BaseBdev2", 00:12:49.151 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:49.151 "is_configured": true, 00:12:49.151 "data_offset": 2048, 00:12:49.151 "data_size": 63488 00:12:49.151 } 00:12:49.151 ] 00:12:49.151 }' 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.151 10:24:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.411 [2024-11-19 10:24:03.072113] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:12:49.411 110.83 IOPS, 332.50 MiB/s [2024-11-19T10:24:03.192Z] [2024-11-19 10:24:03.187301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:12:49.670 [2024-11-19 
10:24:03.413627] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:49.930 [2024-11-19 10:24:03.513428] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:49.930 [2024-11-19 10:24:03.515329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.190 "name": "raid_bdev1", 00:12:50.190 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:50.190 "strip_size_kb": 0, 00:12:50.190 "state": "online", 00:12:50.190 "raid_level": "raid1", 00:12:50.190 "superblock": true, 00:12:50.190 "num_base_bdevs": 2, 00:12:50.190 
"num_base_bdevs_discovered": 2, 00:12:50.190 "num_base_bdevs_operational": 2, 00:12:50.190 "base_bdevs_list": [ 00:12:50.190 { 00:12:50.190 "name": "spare", 00:12:50.190 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:50.190 "is_configured": true, 00:12:50.190 "data_offset": 2048, 00:12:50.190 "data_size": 63488 00:12:50.190 }, 00:12:50.190 { 00:12:50.190 "name": "BaseBdev2", 00:12:50.190 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:50.190 "is_configured": true, 00:12:50.190 "data_offset": 2048, 00:12:50.190 "data_size": 63488 00:12:50.190 } 00:12:50.190 ] 00:12:50.190 }' 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.190 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.191 10:24:03 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.191 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.451 10:24:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.451 "name": "raid_bdev1", 00:12:50.451 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:50.451 "strip_size_kb": 0, 00:12:50.451 "state": "online", 00:12:50.451 "raid_level": "raid1", 00:12:50.451 "superblock": true, 00:12:50.451 "num_base_bdevs": 2, 00:12:50.451 "num_base_bdevs_discovered": 2, 00:12:50.451 "num_base_bdevs_operational": 2, 00:12:50.451 "base_bdevs_list": [ 00:12:50.451 { 00:12:50.451 "name": "spare", 00:12:50.451 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:50.451 "is_configured": true, 00:12:50.451 "data_offset": 2048, 00:12:50.451 "data_size": 63488 00:12:50.451 }, 00:12:50.451 { 00:12:50.451 "name": "BaseBdev2", 00:12:50.451 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:50.451 "is_configured": true, 00:12:50.451 "data_offset": 2048, 00:12:50.451 "data_size": 63488 00:12:50.451 } 00:12:50.451 ] 00:12:50.451 }' 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.451 99.86 IOPS, 299.57 MiB/s [2024-11-19T10:24:04.232Z] 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.451 "name": "raid_bdev1", 00:12:50.451 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:50.451 "strip_size_kb": 0, 00:12:50.451 "state": "online", 00:12:50.451 "raid_level": "raid1", 00:12:50.451 "superblock": true, 00:12:50.451 "num_base_bdevs": 2, 00:12:50.451 "num_base_bdevs_discovered": 2, 00:12:50.451 "num_base_bdevs_operational": 2, 00:12:50.451 "base_bdevs_list": [ 
00:12:50.451 { 00:12:50.451 "name": "spare", 00:12:50.451 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:50.451 "is_configured": true, 00:12:50.451 "data_offset": 2048, 00:12:50.451 "data_size": 63488 00:12:50.451 }, 00:12:50.451 { 00:12:50.451 "name": "BaseBdev2", 00:12:50.451 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:50.451 "is_configured": true, 00:12:50.451 "data_offset": 2048, 00:12:50.451 "data_size": 63488 00:12:50.451 } 00:12:50.451 ] 00:12:50.451 }' 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.451 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.020 [2024-11-19 10:24:04.543457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.020 [2024-11-19 10:24:04.543499] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.020 00:12:51.020 Latency(us) 00:12:51.020 [2024-11-19T10:24:04.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:51.020 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:51.020 raid_bdev1 : 7.58 94.26 282.77 0.00 0.00 14356.66 305.86 113557.58 00:12:51.020 [2024-11-19T10:24:04.801Z] =================================================================================================================== 00:12:51.020 [2024-11-19T10:24:04.801Z] Total : 94.26 282.77 0.00 0.00 14356.66 305.86 113557.58 00:12:51.020 [2024-11-19 10:24:04.648681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.020 [2024-11-19 
10:24:04.648736] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.020 [2024-11-19 10:24:04.648811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.020 [2024-11-19 10:24:04.648823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:51.020 { 00:12:51.020 "results": [ 00:12:51.020 { 00:12:51.020 "job": "raid_bdev1", 00:12:51.020 "core_mask": "0x1", 00:12:51.020 "workload": "randrw", 00:12:51.020 "percentage": 50, 00:12:51.020 "status": "finished", 00:12:51.020 "queue_depth": 2, 00:12:51.020 "io_size": 3145728, 00:12:51.020 "runtime": 7.575194, 00:12:51.020 "iops": 94.25501181883922, 00:12:51.020 "mibps": 282.7650354565177, 00:12:51.020 "io_failed": 0, 00:12:51.020 "io_timeout": 0, 00:12:51.020 "avg_latency_us": 14356.65836360745, 00:12:51.020 "min_latency_us": 305.8585152838428, 00:12:51.020 "max_latency_us": 113557.57554585153 00:12:51.020 } 00:12:51.020 ], 00:12:51.020 "core_count": 1 00:12:51.020 } 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.020 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:51.281 /dev/nbd0 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.281 
10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.281 1+0 records in 00:12:51.281 1+0 records out 00:12:51.281 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421157 s, 9.7 MB/s 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.281 10:24:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:51.541 /dev/nbd1 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.541 1+0 records in 00:12:51.541 1+0 records out 00:12:51.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039053 s, 10.5 MB/s 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:51.541 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.801 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.060 10:24:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.321 [2024-11-19 10:24:06.044170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:52.321 [2024-11-19 10:24:06.044236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.321 [2024-11-19 10:24:06.044260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:52.321 [2024-11-19 10:24:06.044273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.321 [2024-11-19 10:24:06.046660] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.321 [2024-11-19 10:24:06.046708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.321 [2024-11-19 10:24:06.046814] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:52.321 [2024-11-19 10:24:06.046876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.321 [2024-11-19 10:24:06.047096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.321 spare 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.321 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.581 [2024-11-19 10:24:06.147045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:52.581 [2024-11-19 10:24:06.147070] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.581 [2024-11-19 10:24:06.147355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:52.581 [2024-11-19 10:24:06.147550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:52.581 [2024-11-19 10:24:06.147571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:52.581 [2024-11-19 10:24:06.147737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.581 "name": "raid_bdev1", 00:12:52.581 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:52.581 "strip_size_kb": 0, 00:12:52.581 "state": "online", 00:12:52.581 "raid_level": "raid1", 00:12:52.581 "superblock": true, 00:12:52.581 "num_base_bdevs": 2, 00:12:52.581 
"num_base_bdevs_discovered": 2, 00:12:52.581 "num_base_bdevs_operational": 2, 00:12:52.581 "base_bdevs_list": [ 00:12:52.581 { 00:12:52.581 "name": "spare", 00:12:52.581 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:52.581 "is_configured": true, 00:12:52.581 "data_offset": 2048, 00:12:52.581 "data_size": 63488 00:12:52.581 }, 00:12:52.581 { 00:12:52.581 "name": "BaseBdev2", 00:12:52.581 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:52.581 "is_configured": true, 00:12:52.581 "data_offset": 2048, 00:12:52.581 "data_size": 63488 00:12:52.581 } 00:12:52.581 ] 00:12:52.581 }' 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.581 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.842 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.104 "name": "raid_bdev1", 00:12:53.104 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:53.104 "strip_size_kb": 0, 00:12:53.104 "state": "online", 00:12:53.104 "raid_level": "raid1", 00:12:53.104 "superblock": true, 00:12:53.104 "num_base_bdevs": 2, 00:12:53.104 "num_base_bdevs_discovered": 2, 00:12:53.104 "num_base_bdevs_operational": 2, 00:12:53.104 "base_bdevs_list": [ 00:12:53.104 { 00:12:53.104 "name": "spare", 00:12:53.104 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:53.104 "is_configured": true, 00:12:53.104 "data_offset": 2048, 00:12:53.104 "data_size": 63488 00:12:53.104 }, 00:12:53.104 { 00:12:53.104 "name": "BaseBdev2", 00:12:53.104 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:53.104 "is_configured": true, 00:12:53.104 "data_offset": 2048, 00:12:53.104 "data_size": 63488 00:12:53.104 } 00:12:53.104 ] 00:12:53.104 }' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.104 [2024-11-19 10:24:06.811132] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.104 
10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.104 "name": "raid_bdev1", 00:12:53.104 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:53.104 "strip_size_kb": 0, 00:12:53.104 "state": "online", 00:12:53.104 "raid_level": "raid1", 00:12:53.104 "superblock": true, 00:12:53.104 "num_base_bdevs": 2, 00:12:53.104 "num_base_bdevs_discovered": 1, 00:12:53.104 "num_base_bdevs_operational": 1, 00:12:53.104 "base_bdevs_list": [ 00:12:53.104 { 00:12:53.104 "name": null, 00:12:53.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.104 "is_configured": false, 00:12:53.104 "data_offset": 0, 00:12:53.104 "data_size": 63488 00:12:53.104 }, 00:12:53.104 { 00:12:53.104 "name": "BaseBdev2", 00:12:53.104 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:53.104 "is_configured": true, 00:12:53.104 "data_offset": 2048, 00:12:53.104 "data_size": 63488 00:12:53.104 } 00:12:53.104 ] 00:12:53.104 }' 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.104 10:24:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.676 10:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.676 10:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.676 10:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:53.676 [2024-11-19 10:24:07.266447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.676 [2024-11-19 10:24:07.266660] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:53.676 [2024-11-19 10:24:07.266681] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:53.676 [2024-11-19 10:24:07.266726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.676 [2024-11-19 10:24:07.285385] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:53.676 10:24:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.676 10:24:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:53.676 [2024-11-19 10:24:07.287516] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.616 "name": "raid_bdev1", 00:12:54.616 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:54.616 "strip_size_kb": 0, 00:12:54.616 "state": "online", 00:12:54.616 "raid_level": "raid1", 00:12:54.616 "superblock": true, 00:12:54.616 "num_base_bdevs": 2, 00:12:54.616 "num_base_bdevs_discovered": 2, 00:12:54.616 "num_base_bdevs_operational": 2, 00:12:54.616 "process": { 00:12:54.616 "type": "rebuild", 00:12:54.616 "target": "spare", 00:12:54.616 "progress": { 00:12:54.616 "blocks": 20480, 00:12:54.616 "percent": 32 00:12:54.616 } 00:12:54.616 }, 00:12:54.616 "base_bdevs_list": [ 00:12:54.616 { 00:12:54.616 "name": "spare", 00:12:54.616 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:54.616 "is_configured": true, 00:12:54.616 "data_offset": 2048, 00:12:54.616 "data_size": 63488 00:12:54.616 }, 00:12:54.616 { 00:12:54.616 "name": "BaseBdev2", 00:12:54.616 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:54.616 "is_configured": true, 00:12:54.616 "data_offset": 2048, 00:12:54.616 "data_size": 63488 00:12:54.616 } 00:12:54.616 ] 00:12:54.616 }' 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.616 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.876 
[2024-11-19 10:24:08.427673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.876 [2024-11-19 10:24:08.493345] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.876 [2024-11-19 10:24:08.493409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.876 [2024-11-19 10:24:08.493445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.876 [2024-11-19 10:24:08.493453] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.876 10:24:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.876 "name": "raid_bdev1", 00:12:54.876 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:54.876 "strip_size_kb": 0, 00:12:54.876 "state": "online", 00:12:54.876 "raid_level": "raid1", 00:12:54.876 "superblock": true, 00:12:54.876 "num_base_bdevs": 2, 00:12:54.876 "num_base_bdevs_discovered": 1, 00:12:54.876 "num_base_bdevs_operational": 1, 00:12:54.876 "base_bdevs_list": [ 00:12:54.876 { 00:12:54.876 "name": null, 00:12:54.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.876 "is_configured": false, 00:12:54.876 "data_offset": 0, 00:12:54.876 "data_size": 63488 00:12:54.876 }, 00:12:54.876 { 00:12:54.876 "name": "BaseBdev2", 00:12:54.876 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:54.876 "is_configured": true, 00:12:54.876 "data_offset": 2048, 00:12:54.876 "data_size": 63488 00:12:54.876 } 00:12:54.876 ] 00:12:54.876 }' 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.876 10:24:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.445 10:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.445 10:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.445 10:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:55.445 [2024-11-19 10:24:09.007949] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:55.445 [2024-11-19 10:24:09.008033] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.445 [2024-11-19 10:24:09.008064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:55.445 [2024-11-19 10:24:09.008072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.445 [2024-11-19 10:24:09.008544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.445 [2024-11-19 10:24:09.008570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.445 [2024-11-19 10:24:09.008676] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:55.445 [2024-11-19 10:24:09.008693] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:55.445 [2024-11-19 10:24:09.008704] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:55.445 [2024-11-19 10:24:09.008727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.445 [2024-11-19 10:24:09.024521] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:55.445 spare 00:12:55.445 10:24:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.445 [2024-11-19 10:24:09.026323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:55.445 10:24:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.384 "name": "raid_bdev1", 00:12:56.384 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:56.384 "strip_size_kb": 0, 00:12:56.384 
"state": "online", 00:12:56.384 "raid_level": "raid1", 00:12:56.384 "superblock": true, 00:12:56.384 "num_base_bdevs": 2, 00:12:56.384 "num_base_bdevs_discovered": 2, 00:12:56.384 "num_base_bdevs_operational": 2, 00:12:56.384 "process": { 00:12:56.384 "type": "rebuild", 00:12:56.384 "target": "spare", 00:12:56.384 "progress": { 00:12:56.384 "blocks": 20480, 00:12:56.384 "percent": 32 00:12:56.384 } 00:12:56.384 }, 00:12:56.384 "base_bdevs_list": [ 00:12:56.384 { 00:12:56.384 "name": "spare", 00:12:56.384 "uuid": "954e64bd-25d5-5d8a-acb4-ca38668d64fa", 00:12:56.384 "is_configured": true, 00:12:56.384 "data_offset": 2048, 00:12:56.384 "data_size": 63488 00:12:56.384 }, 00:12:56.384 { 00:12:56.384 "name": "BaseBdev2", 00:12:56.384 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:56.384 "is_configured": true, 00:12:56.384 "data_offset": 2048, 00:12:56.384 "data_size": 63488 00:12:56.384 } 00:12:56.384 ] 00:12:56.384 }' 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.384 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.384 [2024-11-19 10:24:10.150178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.644 [2024-11-19 10:24:10.230974] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:56.644 [2024-11-19 10:24:10.231061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.644 [2024-11-19 10:24:10.231075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.644 [2024-11-19 10:24:10.231084] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.644 "name": "raid_bdev1", 00:12:56.644 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:56.644 "strip_size_kb": 0, 00:12:56.644 "state": "online", 00:12:56.644 "raid_level": "raid1", 00:12:56.644 "superblock": true, 00:12:56.644 "num_base_bdevs": 2, 00:12:56.644 "num_base_bdevs_discovered": 1, 00:12:56.644 "num_base_bdevs_operational": 1, 00:12:56.644 "base_bdevs_list": [ 00:12:56.644 { 00:12:56.644 "name": null, 00:12:56.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.644 "is_configured": false, 00:12:56.644 "data_offset": 0, 00:12:56.644 "data_size": 63488 00:12:56.644 }, 00:12:56.644 { 00:12:56.644 "name": "BaseBdev2", 00:12:56.644 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:56.644 "is_configured": true, 00:12:56.644 "data_offset": 2048, 00:12:56.644 "data_size": 63488 00:12:56.644 } 00:12:56.644 ] 00:12:56.644 }' 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.644 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.214 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.214 "name": "raid_bdev1", 00:12:57.214 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:57.214 "strip_size_kb": 0, 00:12:57.214 "state": "online", 00:12:57.214 "raid_level": "raid1", 00:12:57.214 "superblock": true, 00:12:57.214 "num_base_bdevs": 2, 00:12:57.214 "num_base_bdevs_discovered": 1, 00:12:57.214 "num_base_bdevs_operational": 1, 00:12:57.214 "base_bdevs_list": [ 00:12:57.214 { 00:12:57.214 "name": null, 00:12:57.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.214 "is_configured": false, 00:12:57.214 "data_offset": 0, 00:12:57.214 "data_size": 63488 00:12:57.214 }, 00:12:57.214 { 00:12:57.214 "name": "BaseBdev2", 00:12:57.214 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:57.214 "is_configured": true, 00:12:57.214 "data_offset": 2048, 00:12:57.214 "data_size": 63488 00:12:57.214 } 00:12:57.214 ] 00:12:57.215 }' 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:57.215 [2024-11-19 10:24:10.907013] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:57.215 [2024-11-19 10:24:10.907087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.215 [2024-11-19 10:24:10.907110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:57.215 [2024-11-19 10:24:10.907123] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.215 [2024-11-19 10:24:10.907650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.215 [2024-11-19 10:24:10.907685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:57.215 [2024-11-19 10:24:10.907773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:57.215 [2024-11-19 10:24:10.907798] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:57.215 [2024-11-19 10:24:10.907808] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:57.215 [2024-11-19 10:24:10.907821] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:57.215 BaseBdev1 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.215 10:24:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.155 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.414 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.414 "name": "raid_bdev1", 00:12:58.414 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:58.414 "strip_size_kb": 0, 00:12:58.414 "state": "online", 00:12:58.414 "raid_level": "raid1", 00:12:58.415 "superblock": true, 00:12:58.415 "num_base_bdevs": 2, 00:12:58.415 "num_base_bdevs_discovered": 1, 00:12:58.415 "num_base_bdevs_operational": 1, 00:12:58.415 "base_bdevs_list": [ 00:12:58.415 { 00:12:58.415 "name": null, 00:12:58.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.415 "is_configured": false, 00:12:58.415 "data_offset": 0, 00:12:58.415 "data_size": 63488 00:12:58.415 }, 00:12:58.415 { 00:12:58.415 "name": "BaseBdev2", 00:12:58.415 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:58.415 "is_configured": true, 00:12:58.415 "data_offset": 2048, 00:12:58.415 "data_size": 63488 00:12:58.415 } 00:12:58.415 ] 00:12:58.415 }' 00:12:58.415 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.415 10:24:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.674 "name": "raid_bdev1", 00:12:58.674 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:58.674 "strip_size_kb": 0, 00:12:58.674 "state": "online", 00:12:58.674 "raid_level": "raid1", 00:12:58.674 "superblock": true, 00:12:58.674 "num_base_bdevs": 2, 00:12:58.674 "num_base_bdevs_discovered": 1, 00:12:58.674 "num_base_bdevs_operational": 1, 00:12:58.674 "base_bdevs_list": [ 00:12:58.674 { 00:12:58.674 "name": null, 00:12:58.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.674 "is_configured": false, 00:12:58.674 "data_offset": 0, 00:12:58.674 "data_size": 63488 00:12:58.674 }, 00:12:58.674 { 00:12:58.674 "name": "BaseBdev2", 00:12:58.674 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:58.674 "is_configured": true, 00:12:58.674 "data_offset": 2048, 00:12:58.674 "data_size": 63488 00:12:58.674 } 00:12:58.674 ] 00:12:58.674 }' 00:12:58.674 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:58.933 [2024-11-19 10:24:12.552572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.933 [2024-11-19 10:24:12.552753] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:58.933 [2024-11-19 10:24:12.552779] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.933 request: 00:12:58.933 { 00:12:58.933 "base_bdev": "BaseBdev1", 00:12:58.933 "raid_bdev": "raid_bdev1", 00:12:58.933 "method": "bdev_raid_add_base_bdev", 00:12:58.933 "req_id": 1 00:12:58.933 } 00:12:58.933 Got JSON-RPC error response 00:12:58.933 response: 00:12:58.933 { 00:12:58.933 "code": -22, 00:12:58.933 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.933 } 00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:58.933 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:58.934 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.934 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.934 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.934 10:24:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.887 "name": "raid_bdev1", 00:12:59.887 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:12:59.887 "strip_size_kb": 0, 00:12:59.887 "state": "online", 00:12:59.887 "raid_level": "raid1", 00:12:59.887 "superblock": true, 00:12:59.887 "num_base_bdevs": 2, 00:12:59.887 "num_base_bdevs_discovered": 1, 00:12:59.887 "num_base_bdevs_operational": 1, 00:12:59.887 "base_bdevs_list": [ 00:12:59.887 { 00:12:59.887 "name": null, 00:12:59.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.887 "is_configured": false, 00:12:59.887 "data_offset": 0, 00:12:59.887 "data_size": 63488 00:12:59.887 }, 00:12:59.887 { 00:12:59.887 "name": "BaseBdev2", 00:12:59.887 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:12:59.887 "is_configured": true, 00:12:59.887 "data_offset": 2048, 00:12:59.887 "data_size": 63488 00:12:59.887 } 00:12:59.887 ] 00:12:59.887 }' 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.887 10:24:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.456 10:24:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.456 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.456 "name": "raid_bdev1", 00:13:00.456 "uuid": "6144edfe-b3ab-4d24-9db1-4dfcccb4e76e", 00:13:00.456 "strip_size_kb": 0, 00:13:00.456 "state": "online", 00:13:00.456 "raid_level": "raid1", 00:13:00.456 "superblock": true, 00:13:00.456 "num_base_bdevs": 2, 00:13:00.456 "num_base_bdevs_discovered": 1, 00:13:00.456 "num_base_bdevs_operational": 1, 00:13:00.456 "base_bdevs_list": [ 00:13:00.456 { 00:13:00.456 "name": null, 00:13:00.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.456 "is_configured": false, 00:13:00.456 "data_offset": 0, 00:13:00.456 "data_size": 63488 00:13:00.457 }, 00:13:00.457 { 00:13:00.457 "name": "BaseBdev2", 00:13:00.457 "uuid": "774ba66b-f379-5742-a012-66a5145cac60", 00:13:00.457 "is_configured": true, 00:13:00.457 "data_offset": 2048, 00:13:00.457 "data_size": 63488 00:13:00.457 } 00:13:00.457 ] 00:13:00.457 }' 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.457 10:24:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76588 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76588 ']' 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76588 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76588 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76588' 00:13:00.457 killing process with pid 76588 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76588 00:13:00.457 Received shutdown signal, test time was about 17.164127 seconds 00:13:00.457 00:13:00.457 Latency(us) 00:13:00.457 [2024-11-19T10:24:14.238Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.457 [2024-11-19T10:24:14.238Z] =================================================================================================================== 00:13:00.457 [2024-11-19T10:24:14.238Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:00.457 [2024-11-19 10:24:14.197803] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.457 [2024-11-19 10:24:14.198115] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.457 10:24:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76588 00:13:00.457 [2024-11-19 10:24:14.198231] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.457 [2024-11-19 10:24:14.198246] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:00.717 [2024-11-19 10:24:14.481340] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:02.097 00:13:02.097 real 0m20.412s 00:13:02.097 user 0m26.687s 00:13:02.097 sys 0m2.164s 00:13:02.097 ************************************ 00:13:02.097 END TEST raid_rebuild_test_sb_io 00:13:02.097 ************************************ 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.097 10:24:15 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:02.097 10:24:15 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:02.097 10:24:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:02.097 10:24:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.097 10:24:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.097 ************************************ 00:13:02.097 START TEST raid_rebuild_test 00:13:02.097 ************************************ 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:02.097 10:24:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:02.097 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77277 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77277 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77277 ']' 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.098 10:24:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.357 [2024-11-19 10:24:15.916407] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:13:02.357 [2024-11-19 10:24:15.917024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77277 ] 00:13:02.357 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.357 Zero copy mechanism will not be used. 00:13:02.357 [2024-11-19 10:24:16.086813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.617 [2024-11-19 10:24:16.194185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.617 [2024-11-19 10:24:16.391166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.617 [2024-11-19 10:24:16.391286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 BaseBdev1_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:03.187 [2024-11-19 10:24:16.780943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:03.187 [2024-11-19 10:24:16.781073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.187 [2024-11-19 10:24:16.781100] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:03.187 [2024-11-19 10:24:16.781111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.187 [2024-11-19 10:24:16.783155] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.187 [2024-11-19 10:24:16.783193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:03.187 BaseBdev1 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 BaseBdev2_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 [2024-11-19 10:24:16.834557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:03.187 [2024-11-19 10:24:16.834615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:13:03.187 [2024-11-19 10:24:16.834631] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:03.187 [2024-11-19 10:24:16.834643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.187 [2024-11-19 10:24:16.836682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.187 [2024-11-19 10:24:16.836720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:03.187 BaseBdev2 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 BaseBdev3_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 [2024-11-19 10:24:16.899881] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:03.187 [2024-11-19 10:24:16.899933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.187 [2024-11-19 10:24:16.899966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:03.187 [2024-11-19 10:24:16.899976] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.187 [2024-11-19 10:24:16.901930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.187 [2024-11-19 10:24:16.902018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:03.187 BaseBdev3 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 BaseBdev4_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.187 [2024-11-19 10:24:16.953610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:03.187 [2024-11-19 10:24:16.953700] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.187 [2024-11-19 10:24:16.953740] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:03.187 [2024-11-19 10:24:16.953751] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.187 [2024-11-19 10:24:16.955712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.187 [2024-11-19 10:24:16.955753] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:03.187 BaseBdev4 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.187 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 spare_malloc 00:13:03.448 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 10:24:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.448 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 10:24:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 spare_delay 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 [2024-11-19 10:24:17.017663] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:03.448 [2024-11-19 10:24:17.017758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.448 [2024-11-19 10:24:17.017796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:03.448 [2024-11-19 10:24:17.017807] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.448 [2024-11-19 
10:24:17.019765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.448 [2024-11-19 10:24:17.019807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.448 spare 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 [2024-11-19 10:24:17.029687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.448 [2024-11-19 10:24:17.031417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.448 [2024-11-19 10:24:17.031480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.448 [2024-11-19 10:24:17.031530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.448 [2024-11-19 10:24:17.031602] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:03.448 [2024-11-19 10:24:17.031614] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:03.448 [2024-11-19 10:24:17.031841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:03.448 [2024-11-19 10:24:17.032016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:03.448 [2024-11-19 10:24:17.032029] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:03.448 [2024-11-19 10:24:17.032198] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.448 "name": "raid_bdev1", 00:13:03.448 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:03.448 "strip_size_kb": 0, 00:13:03.448 "state": "online", 00:13:03.448 "raid_level": 
"raid1", 00:13:03.448 "superblock": false, 00:13:03.448 "num_base_bdevs": 4, 00:13:03.448 "num_base_bdevs_discovered": 4, 00:13:03.448 "num_base_bdevs_operational": 4, 00:13:03.448 "base_bdevs_list": [ 00:13:03.448 { 00:13:03.448 "name": "BaseBdev1", 00:13:03.448 "uuid": "d1d6fb91-9e09-5514-b20c-f00fe512c661", 00:13:03.448 "is_configured": true, 00:13:03.448 "data_offset": 0, 00:13:03.448 "data_size": 65536 00:13:03.448 }, 00:13:03.448 { 00:13:03.448 "name": "BaseBdev2", 00:13:03.448 "uuid": "3994cec8-6e26-5751-a137-7bb50512a20e", 00:13:03.448 "is_configured": true, 00:13:03.448 "data_offset": 0, 00:13:03.448 "data_size": 65536 00:13:03.448 }, 00:13:03.448 { 00:13:03.448 "name": "BaseBdev3", 00:13:03.448 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:03.448 "is_configured": true, 00:13:03.448 "data_offset": 0, 00:13:03.448 "data_size": 65536 00:13:03.448 }, 00:13:03.448 { 00:13:03.448 "name": "BaseBdev4", 00:13:03.448 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:03.448 "is_configured": true, 00:13:03.448 "data_offset": 0, 00:13:03.448 "data_size": 65536 00:13:03.448 } 00:13:03.448 ] 00:13:03.448 }' 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.448 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.708 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:03.708 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:03.708 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.708 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.708 [2024-11-19 10:24:17.469244] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.968 10:24:17 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:03.968 [2024-11-19 10:24:17.708536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:03.968 /dev/nbd0 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:03.968 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:04.228 1+0 records in 00:13:04.228 1+0 records out 00:13:04.228 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375688 s, 10.9 MB/s 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:04.228 10:24:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:09.624 65536+0 records in 00:13:09.624 65536+0 records out 00:13:09.624 33554432 bytes (34 MB, 32 MiB) copied, 5.21031 s, 6.4 MB/s 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:09.624 10:24:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:09.624 [2024-11-19 10:24:23.186070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:09.624 
10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.624 [2024-11-19 10:24:23.222086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.624 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.625 10:24:23 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.625 "name": "raid_bdev1", 00:13:09.625 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:09.625 "strip_size_kb": 0, 00:13:09.625 "state": "online", 00:13:09.625 "raid_level": "raid1", 00:13:09.625 "superblock": false, 00:13:09.625 "num_base_bdevs": 4, 00:13:09.625 "num_base_bdevs_discovered": 3, 00:13:09.625 "num_base_bdevs_operational": 3, 00:13:09.625 "base_bdevs_list": [ 00:13:09.625 { 00:13:09.625 "name": null, 00:13:09.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.625 "is_configured": false, 00:13:09.625 "data_offset": 0, 00:13:09.625 "data_size": 65536 00:13:09.625 }, 00:13:09.625 { 00:13:09.625 "name": "BaseBdev2", 00:13:09.625 "uuid": "3994cec8-6e26-5751-a137-7bb50512a20e", 00:13:09.625 "is_configured": true, 00:13:09.625 "data_offset": 0, 00:13:09.625 "data_size": 65536 00:13:09.625 }, 00:13:09.625 { 00:13:09.625 "name": "BaseBdev3", 00:13:09.625 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:09.625 "is_configured": true, 00:13:09.625 "data_offset": 0, 00:13:09.625 "data_size": 65536 00:13:09.625 }, 00:13:09.625 { 00:13:09.625 "name": "BaseBdev4", 00:13:09.625 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:09.625 
"is_configured": true, 00:13:09.625 "data_offset": 0, 00:13:09.625 "data_size": 65536 00:13:09.625 } 00:13:09.625 ] 00:13:09.625 }' 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.625 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.890 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.890 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.890 [2024-11-19 10:24:23.649367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.890 [2024-11-19 10:24:23.664222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:09.890 10:24:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.890 10:24:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.890 [2024-11-19 10:24:23.666074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.270 "name": "raid_bdev1", 00:13:11.270 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:11.270 "strip_size_kb": 0, 00:13:11.270 "state": "online", 00:13:11.270 "raid_level": "raid1", 00:13:11.270 "superblock": false, 00:13:11.270 "num_base_bdevs": 4, 00:13:11.270 "num_base_bdevs_discovered": 4, 00:13:11.270 "num_base_bdevs_operational": 4, 00:13:11.270 "process": { 00:13:11.270 "type": "rebuild", 00:13:11.270 "target": "spare", 00:13:11.270 "progress": { 00:13:11.270 "blocks": 20480, 00:13:11.270 "percent": 31 00:13:11.270 } 00:13:11.270 }, 00:13:11.270 "base_bdevs_list": [ 00:13:11.270 { 00:13:11.270 "name": "spare", 00:13:11.270 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 }, 00:13:11.270 { 00:13:11.270 "name": "BaseBdev2", 00:13:11.270 "uuid": "3994cec8-6e26-5751-a137-7bb50512a20e", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 }, 00:13:11.270 { 00:13:11.270 "name": "BaseBdev3", 00:13:11.270 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 }, 00:13:11.270 { 00:13:11.270 "name": "BaseBdev4", 00:13:11.270 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 } 00:13:11.270 ] 00:13:11.270 }' 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.270 [2024-11-19 10:24:24.805544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.270 [2024-11-19 10:24:24.870970] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:11.270 [2024-11-19 10:24:24.871101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.270 [2024-11-19 10:24:24.871136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:11.270 [2024-11-19 10:24:24.871159] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.270 "name": "raid_bdev1", 00:13:11.270 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:11.270 "strip_size_kb": 0, 00:13:11.270 "state": "online", 00:13:11.270 "raid_level": "raid1", 00:13:11.270 "superblock": false, 00:13:11.270 "num_base_bdevs": 4, 00:13:11.270 "num_base_bdevs_discovered": 3, 00:13:11.270 "num_base_bdevs_operational": 3, 00:13:11.270 "base_bdevs_list": [ 00:13:11.270 { 00:13:11.270 "name": null, 00:13:11.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.270 "is_configured": false, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 }, 00:13:11.270 { 00:13:11.270 "name": "BaseBdev2", 00:13:11.270 "uuid": "3994cec8-6e26-5751-a137-7bb50512a20e", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 }, 00:13:11.270 { 
00:13:11.270 "name": "BaseBdev3", 00:13:11.270 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 }, 00:13:11.270 { 00:13:11.270 "name": "BaseBdev4", 00:13:11.270 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:11.270 "is_configured": true, 00:13:11.270 "data_offset": 0, 00:13:11.270 "data_size": 65536 00:13:11.270 } 00:13:11.270 ] 00:13:11.270 }' 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.270 10:24:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.531 10:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.791 "name": "raid_bdev1", 00:13:11.791 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:11.791 "strip_size_kb": 0, 00:13:11.791 "state": "online", 
00:13:11.791 "raid_level": "raid1", 00:13:11.791 "superblock": false, 00:13:11.791 "num_base_bdevs": 4, 00:13:11.791 "num_base_bdevs_discovered": 3, 00:13:11.791 "num_base_bdevs_operational": 3, 00:13:11.791 "base_bdevs_list": [ 00:13:11.791 { 00:13:11.791 "name": null, 00:13:11.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.791 "is_configured": false, 00:13:11.791 "data_offset": 0, 00:13:11.791 "data_size": 65536 00:13:11.791 }, 00:13:11.791 { 00:13:11.791 "name": "BaseBdev2", 00:13:11.791 "uuid": "3994cec8-6e26-5751-a137-7bb50512a20e", 00:13:11.791 "is_configured": true, 00:13:11.791 "data_offset": 0, 00:13:11.791 "data_size": 65536 00:13:11.791 }, 00:13:11.791 { 00:13:11.791 "name": "BaseBdev3", 00:13:11.791 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:11.791 "is_configured": true, 00:13:11.791 "data_offset": 0, 00:13:11.791 "data_size": 65536 00:13:11.791 }, 00:13:11.791 { 00:13:11.791 "name": "BaseBdev4", 00:13:11.791 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:11.791 "is_configured": true, 00:13:11.791 "data_offset": 0, 00:13:11.791 "data_size": 65536 00:13:11.791 } 00:13:11.791 ] 00:13:11.791 }' 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.791 [2024-11-19 10:24:25.442853] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.791 [2024-11-19 10:24:25.456700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.791 10:24:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:11.791 [2024-11-19 10:24:25.458500] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.730 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.990 "name": "raid_bdev1", 00:13:12.990 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:12.990 "strip_size_kb": 0, 00:13:12.990 "state": "online", 00:13:12.990 "raid_level": "raid1", 00:13:12.990 "superblock": false, 00:13:12.990 "num_base_bdevs": 4, 00:13:12.990 
"num_base_bdevs_discovered": 4, 00:13:12.990 "num_base_bdevs_operational": 4, 00:13:12.990 "process": { 00:13:12.990 "type": "rebuild", 00:13:12.990 "target": "spare", 00:13:12.990 "progress": { 00:13:12.990 "blocks": 20480, 00:13:12.990 "percent": 31 00:13:12.990 } 00:13:12.990 }, 00:13:12.990 "base_bdevs_list": [ 00:13:12.990 { 00:13:12.990 "name": "spare", 00:13:12.990 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 }, 00:13:12.990 { 00:13:12.990 "name": "BaseBdev2", 00:13:12.990 "uuid": "3994cec8-6e26-5751-a137-7bb50512a20e", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 }, 00:13:12.990 { 00:13:12.990 "name": "BaseBdev3", 00:13:12.990 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 }, 00:13:12.990 { 00:13:12.990 "name": "BaseBdev4", 00:13:12.990 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 } 00:13:12.990 ] 00:13:12.990 }' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.990 [2024-11-19 10:24:26.618827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:12.990 [2024-11-19 10:24:26.663251] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.990 10:24:26 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.990 "name": "raid_bdev1", 00:13:12.990 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:12.990 "strip_size_kb": 0, 00:13:12.990 "state": "online", 00:13:12.990 "raid_level": "raid1", 00:13:12.990 "superblock": false, 00:13:12.990 "num_base_bdevs": 4, 00:13:12.990 "num_base_bdevs_discovered": 3, 00:13:12.990 "num_base_bdevs_operational": 3, 00:13:12.990 "process": { 00:13:12.990 "type": "rebuild", 00:13:12.990 "target": "spare", 00:13:12.990 "progress": { 00:13:12.990 "blocks": 24576, 00:13:12.990 "percent": 37 00:13:12.990 } 00:13:12.990 }, 00:13:12.990 "base_bdevs_list": [ 00:13:12.990 { 00:13:12.990 "name": "spare", 00:13:12.990 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 }, 00:13:12.990 { 00:13:12.990 "name": null, 00:13:12.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:12.990 "is_configured": false, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 }, 00:13:12.990 { 00:13:12.990 "name": "BaseBdev3", 00:13:12.990 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 }, 00:13:12.990 { 00:13:12.990 "name": "BaseBdev4", 00:13:12.990 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:12.990 "is_configured": true, 00:13:12.990 "data_offset": 0, 00:13:12.990 "data_size": 65536 00:13:12.990 } 00:13:12.990 ] 00:13:12.990 }' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.990 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=431 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.250 "name": "raid_bdev1", 00:13:13.250 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:13.250 "strip_size_kb": 0, 00:13:13.250 "state": "online", 00:13:13.250 "raid_level": "raid1", 00:13:13.250 "superblock": false, 00:13:13.250 "num_base_bdevs": 4, 00:13:13.250 "num_base_bdevs_discovered": 3, 00:13:13.250 "num_base_bdevs_operational": 3, 00:13:13.250 "process": { 00:13:13.250 "type": "rebuild", 00:13:13.250 "target": "spare", 00:13:13.250 "progress": { 
00:13:13.250 "blocks": 26624, 00:13:13.250 "percent": 40 00:13:13.250 } 00:13:13.250 }, 00:13:13.250 "base_bdevs_list": [ 00:13:13.250 { 00:13:13.250 "name": "spare", 00:13:13.250 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:13.250 "is_configured": true, 00:13:13.250 "data_offset": 0, 00:13:13.250 "data_size": 65536 00:13:13.250 }, 00:13:13.250 { 00:13:13.250 "name": null, 00:13:13.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.250 "is_configured": false, 00:13:13.250 "data_offset": 0, 00:13:13.250 "data_size": 65536 00:13:13.250 }, 00:13:13.250 { 00:13:13.250 "name": "BaseBdev3", 00:13:13.250 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:13.250 "is_configured": true, 00:13:13.250 "data_offset": 0, 00:13:13.250 "data_size": 65536 00:13:13.250 }, 00:13:13.250 { 00:13:13.250 "name": "BaseBdev4", 00:13:13.250 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:13.250 "is_configured": true, 00:13:13.250 "data_offset": 0, 00:13:13.250 "data_size": 65536 00:13:13.250 } 00:13:13.250 ] 00:13:13.250 }' 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.250 10:24:26 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:14.191 10:24:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.450 10:24:27 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:14.450 10:24:27 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:14.450 "name": "raid_bdev1", 00:13:14.450 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:14.450 "strip_size_kb": 0, 00:13:14.450 "state": "online", 00:13:14.450 "raid_level": "raid1", 00:13:14.450 "superblock": false, 00:13:14.450 "num_base_bdevs": 4, 00:13:14.450 "num_base_bdevs_discovered": 3, 00:13:14.450 "num_base_bdevs_operational": 3, 00:13:14.450 "process": { 00:13:14.450 "type": "rebuild", 00:13:14.450 "target": "spare", 00:13:14.450 "progress": { 00:13:14.450 "blocks": 49152, 00:13:14.450 "percent": 75 00:13:14.450 } 00:13:14.450 }, 00:13:14.450 "base_bdevs_list": [ 00:13:14.450 { 00:13:14.450 "name": "spare", 00:13:14.450 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:14.450 "is_configured": true, 00:13:14.450 "data_offset": 0, 00:13:14.450 "data_size": 65536 00:13:14.450 }, 00:13:14.450 { 00:13:14.450 "name": null, 00:13:14.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:14.450 "is_configured": false, 00:13:14.450 "data_offset": 0, 00:13:14.450 "data_size": 65536 00:13:14.450 }, 00:13:14.450 { 00:13:14.451 "name": "BaseBdev3", 00:13:14.451 "uuid": 
"f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:14.451 "is_configured": true, 00:13:14.451 "data_offset": 0, 00:13:14.451 "data_size": 65536 00:13:14.451 }, 00:13:14.451 { 00:13:14.451 "name": "BaseBdev4", 00:13:14.451 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:14.451 "is_configured": true, 00:13:14.451 "data_offset": 0, 00:13:14.451 "data_size": 65536 00:13:14.451 } 00:13:14.451 ] 00:13:14.451 }' 00:13:14.451 10:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:14.451 10:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:14.451 10:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.451 10:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.451 10:24:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.021 [2024-11-19 10:24:28.671639] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:15.021 [2024-11-19 10:24:28.671818] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.021 [2024-11-19 10:24:28.671890] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.590 10:24:29 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.590 "name": "raid_bdev1", 00:13:15.590 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:15.590 "strip_size_kb": 0, 00:13:15.590 "state": "online", 00:13:15.590 "raid_level": "raid1", 00:13:15.590 "superblock": false, 00:13:15.590 "num_base_bdevs": 4, 00:13:15.590 "num_base_bdevs_discovered": 3, 00:13:15.590 "num_base_bdevs_operational": 3, 00:13:15.590 "base_bdevs_list": [ 00:13:15.590 { 00:13:15.590 "name": "spare", 00:13:15.590 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:15.590 "is_configured": true, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 }, 00:13:15.590 { 00:13:15.590 "name": null, 00:13:15.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.590 "is_configured": false, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 }, 00:13:15.590 { 00:13:15.590 "name": "BaseBdev3", 00:13:15.590 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:15.590 "is_configured": true, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 }, 00:13:15.590 { 00:13:15.590 "name": "BaseBdev4", 00:13:15.590 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:15.590 "is_configured": true, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 } 00:13:15.590 ] 00:13:15.590 }' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.590 "name": "raid_bdev1", 00:13:15.590 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:15.590 "strip_size_kb": 0, 00:13:15.590 "state": "online", 00:13:15.590 "raid_level": "raid1", 00:13:15.590 "superblock": false, 00:13:15.590 "num_base_bdevs": 4, 00:13:15.590 "num_base_bdevs_discovered": 3, 00:13:15.590 "num_base_bdevs_operational": 3, 00:13:15.590 
"base_bdevs_list": [ 00:13:15.590 { 00:13:15.590 "name": "spare", 00:13:15.590 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:15.590 "is_configured": true, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 }, 00:13:15.590 { 00:13:15.590 "name": null, 00:13:15.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.590 "is_configured": false, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 }, 00:13:15.590 { 00:13:15.590 "name": "BaseBdev3", 00:13:15.590 "uuid": "f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:15.590 "is_configured": true, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 }, 00:13:15.590 { 00:13:15.590 "name": "BaseBdev4", 00:13:15.590 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:15.590 "is_configured": true, 00:13:15.590 "data_offset": 0, 00:13:15.590 "data_size": 65536 00:13:15.590 } 00:13:15.590 ] 00:13:15.590 }' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:15.590 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.849 "name": "raid_bdev1", 00:13:15.849 "uuid": "556e4333-6e31-4d1d-92d2-f81335459118", 00:13:15.849 "strip_size_kb": 0, 00:13:15.849 "state": "online", 00:13:15.849 "raid_level": "raid1", 00:13:15.849 "superblock": false, 00:13:15.849 "num_base_bdevs": 4, 00:13:15.849 "num_base_bdevs_discovered": 3, 00:13:15.849 "num_base_bdevs_operational": 3, 00:13:15.849 "base_bdevs_list": [ 00:13:15.849 { 00:13:15.849 "name": "spare", 00:13:15.849 "uuid": "ae90002c-aebe-5cc1-b299-fe04e375d918", 00:13:15.849 "is_configured": true, 00:13:15.849 "data_offset": 0, 00:13:15.849 "data_size": 65536 00:13:15.849 }, 00:13:15.849 { 00:13:15.849 "name": null, 00:13:15.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.849 "is_configured": false, 00:13:15.849 "data_offset": 0, 00:13:15.849 "data_size": 65536 00:13:15.849 }, 00:13:15.849 { 00:13:15.849 "name": "BaseBdev3", 00:13:15.849 "uuid": 
"f1a7a9ed-e6a7-5239-a97d-4533f8bee406", 00:13:15.849 "is_configured": true, 00:13:15.849 "data_offset": 0, 00:13:15.849 "data_size": 65536 00:13:15.849 }, 00:13:15.849 { 00:13:15.849 "name": "BaseBdev4", 00:13:15.849 "uuid": "9d856f2e-4a87-500c-81c3-640baf5a6d87", 00:13:15.849 "is_configured": true, 00:13:15.849 "data_offset": 0, 00:13:15.849 "data_size": 65536 00:13:15.849 } 00:13:15.849 ] 00:13:15.849 }' 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.849 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.108 [2024-11-19 10:24:29.842756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.108 [2024-11-19 10:24:29.842787] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.108 [2024-11-19 10:24:29.842864] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.108 [2024-11-19 10:24:29.842943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.108 [2024-11-19 10:24:29.842953] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.108 10:24:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.367 10:24:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:16.367 /dev/nbd0 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:16.367 10:24:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.367 1+0 records in 00:13:16.367 1+0 records out 00:13:16.367 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402973 s, 10.2 MB/s 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:16.367 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.368 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.368 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:16.368 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:16.368 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.368 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:16.627 /dev/nbd1 00:13:16.627 
10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:16.627 1+0 records in 00:13:16.627 1+0 records out 00:13:16.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556041 s, 7.4 MB/s 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:16.627 10:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.887 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:17.147 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:17.147 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:17.147 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.148 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77277 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77277 ']' 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77277 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.407 10:24:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77277 00:13:17.407 killing process with pid 77277 00:13:17.407 Received shutdown signal, test time was about 60.000000 seconds 00:13:17.407 00:13:17.407 Latency(us) 00:13:17.407 [2024-11-19T10:24:31.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:17.407 [2024-11-19T10:24:31.188Z] 
=================================================================================================================== 00:13:17.407 [2024-11-19T10:24:31.188Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:17.407 10:24:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.407 10:24:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.407 10:24:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77277' 00:13:17.407 10:24:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77277 00:13:17.407 [2024-11-19 10:24:31.004269] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.407 10:24:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77277 00:13:17.976 [2024-11-19 10:24:31.470346] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:18.917 00:13:18.917 real 0m16.700s 00:13:18.917 user 0m18.774s 00:13:18.917 sys 0m2.891s 00:13:18.917 ************************************ 00:13:18.917 END TEST raid_rebuild_test 00:13:18.917 ************************************ 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.917 10:24:32 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:18.917 10:24:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:18.917 10:24:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.917 10:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:18.917 ************************************ 00:13:18.917 START TEST raid_rebuild_test_sb 00:13:18.917 
************************************ 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 
00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77718 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77718 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77718 ']' 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:18.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:18.917 10:24:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.917 [2024-11-19 10:24:32.689944] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:18.917 [2024-11-19 10:24:32.690158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:18.917 Zero copy mechanism will not be used. 00:13:18.917 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77718 ] 00:13:19.177 [2024-11-19 10:24:32.860618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.437 [2024-11-19 10:24:32.964092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.437 [2024-11-19 10:24:33.146096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.437 [2024-11-19 10:24:33.146221] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 BaseBdev1_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 [2024-11-19 10:24:33.569167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:20.007 [2024-11-19 10:24:33.569247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.007 [2024-11-19 10:24:33.569269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:20.007 [2024-11-19 10:24:33.569291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.007 [2024-11-19 10:24:33.571295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.007 [2024-11-19 10:24:33.571407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.007 BaseBdev1 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:20.007 BaseBdev2_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 [2024-11-19 10:24:33.622689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:20.007 [2024-11-19 10:24:33.622742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.007 [2024-11-19 10:24:33.622760] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:20.007 [2024-11-19 10:24:33.622772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.007 [2024-11-19 10:24:33.624730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.007 [2024-11-19 10:24:33.624770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.007 BaseBdev2 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 BaseBdev3_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 [2024-11-19 10:24:33.687144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:20.007 [2024-11-19 10:24:33.687194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.007 [2024-11-19 10:24:33.687212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:20.007 [2024-11-19 10:24:33.687223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.007 [2024-11-19 10:24:33.689227] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.007 [2024-11-19 10:24:33.689265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:20.007 BaseBdev3 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 BaseBdev4_malloc 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:20.007 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.007 [2024-11-19 10:24:33.741301] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:20.007 [2024-11-19 10:24:33.741351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.007 [2024-11-19 10:24:33.741383] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:20.008 [2024-11-19 10:24:33.741393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.008 [2024-11-19 10:24:33.743389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.008 [2024-11-19 10:24:33.743465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:20.008 BaseBdev4 00:13:20.008 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.008 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:20.008 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.008 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.268 spare_malloc 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.268 spare_delay 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.268 [2024-11-19 10:24:33.807435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.268 [2024-11-19 10:24:33.807488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.268 [2024-11-19 10:24:33.807522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:20.268 [2024-11-19 10:24:33.807532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.268 [2024-11-19 10:24:33.809471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.268 [2024-11-19 10:24:33.809566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.268 spare 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.268 [2024-11-19 10:24:33.819482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.268 [2024-11-19 10:24:33.821204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.268 [2024-11-19 10:24:33.821271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.268 [2024-11-19 10:24:33.821321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:20.268 [2024-11-19 10:24:33.821498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.268 [2024-11-19 10:24:33.821527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.268 [2024-11-19 10:24:33.821743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:20.268 [2024-11-19 10:24:33.821900] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.268 [2024-11-19 10:24:33.821910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:20.268 [2024-11-19 10:24:33.822051] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.268 "name": "raid_bdev1", 00:13:20.268 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:20.268 "strip_size_kb": 0, 00:13:20.268 "state": "online", 00:13:20.268 "raid_level": "raid1", 00:13:20.268 "superblock": true, 00:13:20.268 "num_base_bdevs": 4, 00:13:20.268 "num_base_bdevs_discovered": 4, 00:13:20.268 "num_base_bdevs_operational": 4, 00:13:20.268 "base_bdevs_list": [ 00:13:20.268 { 00:13:20.268 "name": "BaseBdev1", 00:13:20.268 "uuid": "c6a047a6-49ee-595c-8063-b9bde9c780a0", 00:13:20.268 "is_configured": true, 00:13:20.268 "data_offset": 2048, 00:13:20.268 "data_size": 63488 00:13:20.268 }, 00:13:20.268 { 00:13:20.268 "name": "BaseBdev2", 00:13:20.268 "uuid": "479fde15-b9e0-570f-9e96-f951ca3a7e3d", 00:13:20.268 "is_configured": true, 00:13:20.268 "data_offset": 2048, 00:13:20.268 "data_size": 63488 00:13:20.268 }, 00:13:20.268 { 00:13:20.268 "name": "BaseBdev3", 00:13:20.268 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:20.268 "is_configured": true, 00:13:20.268 "data_offset": 2048, 00:13:20.268 "data_size": 63488 00:13:20.268 }, 00:13:20.268 { 00:13:20.268 "name": "BaseBdev4", 00:13:20.268 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:20.268 "is_configured": true, 00:13:20.268 "data_offset": 2048, 00:13:20.268 "data_size": 63488 00:13:20.268 } 00:13:20.268 ] 00:13:20.268 }' 
00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.268 10:24:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:20.528 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:20.528 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.528 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.528 [2024-11-19 10:24:34.271012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.529 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:20.789 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:20.789 [2024-11-19 10:24:34.558241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:21.049 /dev/nbd0 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:21.049 1+0 records in 00:13:21.049 1+0 records out 00:13:21.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494551 s, 8.3 MB/s 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:21.049 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:21.050 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:21.050 10:24:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:26.432 63488+0 records in 00:13:26.432 63488+0 records out 00:13:26.432 32505856 bytes (33 MB, 31 MiB) copied, 5.48236 s, 5.9 MB/s 00:13:26.432 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:26.432 10:24:40 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:26.432 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:26.432 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:26.432 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:26.432 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.432 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:26.692 [2024-11-19 10:24:40.307971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.692 [2024-11-19 10:24:40.336009] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.692 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.692 "name": "raid_bdev1", 00:13:26.692 "uuid": 
"08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:26.692 "strip_size_kb": 0, 00:13:26.692 "state": "online", 00:13:26.692 "raid_level": "raid1", 00:13:26.692 "superblock": true, 00:13:26.692 "num_base_bdevs": 4, 00:13:26.692 "num_base_bdevs_discovered": 3, 00:13:26.692 "num_base_bdevs_operational": 3, 00:13:26.692 "base_bdevs_list": [ 00:13:26.692 { 00:13:26.692 "name": null, 00:13:26.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.693 "is_configured": false, 00:13:26.693 "data_offset": 0, 00:13:26.693 "data_size": 63488 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "name": "BaseBdev2", 00:13:26.693 "uuid": "479fde15-b9e0-570f-9e96-f951ca3a7e3d", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "name": "BaseBdev3", 00:13:26.693 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 }, 00:13:26.693 { 00:13:26.693 "name": "BaseBdev4", 00:13:26.693 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:26.693 "is_configured": true, 00:13:26.693 "data_offset": 2048, 00:13:26.693 "data_size": 63488 00:13:26.693 } 00:13:26.693 ] 00:13:26.693 }' 00:13:26.693 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.693 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.263 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.263 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.263 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.263 [2024-11-19 10:24:40.827158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.263 [2024-11-19 10:24:40.840321] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:27.263 10:24:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.263 10:24:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:27.263 [2024-11-19 10:24:40.842196] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.202 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.202 "name": "raid_bdev1", 00:13:28.202 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:28.202 "strip_size_kb": 0, 00:13:28.202 "state": "online", 00:13:28.202 "raid_level": "raid1", 00:13:28.202 "superblock": true, 00:13:28.202 "num_base_bdevs": 4, 00:13:28.202 "num_base_bdevs_discovered": 4, 00:13:28.202 "num_base_bdevs_operational": 4, 00:13:28.202 "process": { 00:13:28.202 "type": 
"rebuild", 00:13:28.202 "target": "spare", 00:13:28.202 "progress": { 00:13:28.202 "blocks": 20480, 00:13:28.202 "percent": 32 00:13:28.202 } 00:13:28.202 }, 00:13:28.202 "base_bdevs_list": [ 00:13:28.202 { 00:13:28.202 "name": "spare", 00:13:28.202 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:28.202 "is_configured": true, 00:13:28.202 "data_offset": 2048, 00:13:28.202 "data_size": 63488 00:13:28.202 }, 00:13:28.202 { 00:13:28.202 "name": "BaseBdev2", 00:13:28.202 "uuid": "479fde15-b9e0-570f-9e96-f951ca3a7e3d", 00:13:28.202 "is_configured": true, 00:13:28.202 "data_offset": 2048, 00:13:28.202 "data_size": 63488 00:13:28.202 }, 00:13:28.202 { 00:13:28.202 "name": "BaseBdev3", 00:13:28.202 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:28.202 "is_configured": true, 00:13:28.202 "data_offset": 2048, 00:13:28.202 "data_size": 63488 00:13:28.202 }, 00:13:28.202 { 00:13:28.202 "name": "BaseBdev4", 00:13:28.202 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:28.202 "is_configured": true, 00:13:28.202 "data_offset": 2048, 00:13:28.202 "data_size": 63488 00:13:28.202 } 00:13:28.203 ] 00:13:28.203 }' 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.203 10:24:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.462 [2024-11-19 10:24:41.985310] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.463 [2024-11-19 10:24:42.046707] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:28.463 [2024-11-19 10:24:42.046766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.463 [2024-11-19 10:24:42.046781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:28.463 [2024-11-19 10:24:42.046790] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.463 "name": "raid_bdev1", 00:13:28.463 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:28.463 "strip_size_kb": 0, 00:13:28.463 "state": "online", 00:13:28.463 "raid_level": "raid1", 00:13:28.463 "superblock": true, 00:13:28.463 "num_base_bdevs": 4, 00:13:28.463 "num_base_bdevs_discovered": 3, 00:13:28.463 "num_base_bdevs_operational": 3, 00:13:28.463 "base_bdevs_list": [ 00:13:28.463 { 00:13:28.463 "name": null, 00:13:28.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.463 "is_configured": false, 00:13:28.463 "data_offset": 0, 00:13:28.463 "data_size": 63488 00:13:28.463 }, 00:13:28.463 { 00:13:28.463 "name": "BaseBdev2", 00:13:28.463 "uuid": "479fde15-b9e0-570f-9e96-f951ca3a7e3d", 00:13:28.463 "is_configured": true, 00:13:28.463 "data_offset": 2048, 00:13:28.463 "data_size": 63488 00:13:28.463 }, 00:13:28.463 { 00:13:28.463 "name": "BaseBdev3", 00:13:28.463 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:28.463 "is_configured": true, 00:13:28.463 "data_offset": 2048, 00:13:28.463 "data_size": 63488 00:13:28.463 }, 00:13:28.463 { 00:13:28.463 "name": "BaseBdev4", 00:13:28.463 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:28.463 "is_configured": true, 00:13:28.463 "data_offset": 2048, 00:13:28.463 "data_size": 63488 00:13:28.463 } 00:13:28.463 ] 00:13:28.463 }' 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.463 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.723 10:24:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.723 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.983 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.983 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.983 "name": "raid_bdev1", 00:13:28.983 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:28.983 "strip_size_kb": 0, 00:13:28.983 "state": "online", 00:13:28.983 "raid_level": "raid1", 00:13:28.983 "superblock": true, 00:13:28.983 "num_base_bdevs": 4, 00:13:28.983 "num_base_bdevs_discovered": 3, 00:13:28.983 "num_base_bdevs_operational": 3, 00:13:28.983 "base_bdevs_list": [ 00:13:28.983 { 00:13:28.983 "name": null, 00:13:28.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.983 "is_configured": false, 00:13:28.983 "data_offset": 0, 00:13:28.983 "data_size": 63488 00:13:28.983 }, 00:13:28.983 { 00:13:28.983 "name": "BaseBdev2", 00:13:28.983 "uuid": "479fde15-b9e0-570f-9e96-f951ca3a7e3d", 00:13:28.983 "is_configured": true, 00:13:28.983 "data_offset": 2048, 00:13:28.983 "data_size": 
63488 00:13:28.983 }, 00:13:28.983 { 00:13:28.983 "name": "BaseBdev3", 00:13:28.983 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:28.983 "is_configured": true, 00:13:28.983 "data_offset": 2048, 00:13:28.983 "data_size": 63488 00:13:28.983 }, 00:13:28.983 { 00:13:28.983 "name": "BaseBdev4", 00:13:28.983 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:28.983 "is_configured": true, 00:13:28.983 "data_offset": 2048, 00:13:28.983 "data_size": 63488 00:13:28.983 } 00:13:28.983 ] 00:13:28.983 }' 00:13:28.983 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.983 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.984 [2024-11-19 10:24:42.646757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.984 [2024-11-19 10:24:42.660701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.984 10:24:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:28.984 [2024-11-19 10:24:42.662581] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.923 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.183 "name": "raid_bdev1", 00:13:30.183 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:30.183 "strip_size_kb": 0, 00:13:30.183 "state": "online", 00:13:30.183 "raid_level": "raid1", 00:13:30.183 "superblock": true, 00:13:30.183 "num_base_bdevs": 4, 00:13:30.183 "num_base_bdevs_discovered": 4, 00:13:30.183 "num_base_bdevs_operational": 4, 00:13:30.183 "process": { 00:13:30.183 "type": "rebuild", 00:13:30.183 "target": "spare", 00:13:30.183 "progress": { 00:13:30.183 "blocks": 20480, 00:13:30.183 "percent": 32 00:13:30.183 } 00:13:30.183 }, 00:13:30.183 "base_bdevs_list": [ 00:13:30.183 { 00:13:30.183 "name": "spare", 00:13:30.183 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:30.183 "is_configured": true, 00:13:30.183 "data_offset": 2048, 00:13:30.183 "data_size": 63488 00:13:30.183 }, 00:13:30.183 { 00:13:30.183 "name": "BaseBdev2", 00:13:30.183 "uuid": 
"479fde15-b9e0-570f-9e96-f951ca3a7e3d", 00:13:30.183 "is_configured": true, 00:13:30.183 "data_offset": 2048, 00:13:30.183 "data_size": 63488 00:13:30.183 }, 00:13:30.183 { 00:13:30.183 "name": "BaseBdev3", 00:13:30.183 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:30.183 "is_configured": true, 00:13:30.183 "data_offset": 2048, 00:13:30.183 "data_size": 63488 00:13:30.183 }, 00:13:30.183 { 00:13:30.183 "name": "BaseBdev4", 00:13:30.183 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:30.183 "is_configured": true, 00:13:30.183 "data_offset": 2048, 00:13:30.183 "data_size": 63488 00:13:30.183 } 00:13:30.183 ] 00:13:30.183 }' 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:30.183 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:30.183 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.183 10:24:43 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.183 [2024-11-19 10:24:43.821685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:30.443 [2024-11-19 10:24:43.966989] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.443 10:24:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.444 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.444 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.444 10:24:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.444 "name": "raid_bdev1", 00:13:30.444 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:30.444 "strip_size_kb": 0, 00:13:30.444 
"state": "online", 00:13:30.444 "raid_level": "raid1", 00:13:30.444 "superblock": true, 00:13:30.444 "num_base_bdevs": 4, 00:13:30.444 "num_base_bdevs_discovered": 3, 00:13:30.444 "num_base_bdevs_operational": 3, 00:13:30.444 "process": { 00:13:30.444 "type": "rebuild", 00:13:30.444 "target": "spare", 00:13:30.444 "progress": { 00:13:30.444 "blocks": 24576, 00:13:30.444 "percent": 38 00:13:30.444 } 00:13:30.444 }, 00:13:30.444 "base_bdevs_list": [ 00:13:30.444 { 00:13:30.444 "name": "spare", 00:13:30.444 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:30.444 "is_configured": true, 00:13:30.444 "data_offset": 2048, 00:13:30.444 "data_size": 63488 00:13:30.444 }, 00:13:30.444 { 00:13:30.444 "name": null, 00:13:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.444 "is_configured": false, 00:13:30.444 "data_offset": 0, 00:13:30.444 "data_size": 63488 00:13:30.444 }, 00:13:30.444 { 00:13:30.444 "name": "BaseBdev3", 00:13:30.444 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:30.444 "is_configured": true, 00:13:30.444 "data_offset": 2048, 00:13:30.444 "data_size": 63488 00:13:30.444 }, 00:13:30.444 { 00:13:30.444 "name": "BaseBdev4", 00:13:30.444 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:30.444 "is_configured": true, 00:13:30.444 "data_offset": 2048, 00:13:30.444 "data_size": 63488 00:13:30.444 } 00:13:30.444 ] 00:13:30.444 }' 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.444 "name": "raid_bdev1", 00:13:30.444 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:30.444 "strip_size_kb": 0, 00:13:30.444 "state": "online", 00:13:30.444 "raid_level": "raid1", 00:13:30.444 "superblock": true, 00:13:30.444 "num_base_bdevs": 4, 00:13:30.444 "num_base_bdevs_discovered": 3, 00:13:30.444 "num_base_bdevs_operational": 3, 00:13:30.444 "process": { 00:13:30.444 "type": "rebuild", 00:13:30.444 "target": "spare", 00:13:30.444 "progress": { 00:13:30.444 "blocks": 26624, 00:13:30.444 "percent": 41 00:13:30.444 } 00:13:30.444 }, 00:13:30.444 "base_bdevs_list": [ 00:13:30.444 { 00:13:30.444 "name": "spare", 00:13:30.444 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:30.444 "is_configured": 
true, 00:13:30.444 "data_offset": 2048, 00:13:30.444 "data_size": 63488 00:13:30.444 }, 00:13:30.444 { 00:13:30.444 "name": null, 00:13:30.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.444 "is_configured": false, 00:13:30.444 "data_offset": 0, 00:13:30.444 "data_size": 63488 00:13:30.444 }, 00:13:30.444 { 00:13:30.444 "name": "BaseBdev3", 00:13:30.444 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:30.444 "is_configured": true, 00:13:30.444 "data_offset": 2048, 00:13:30.444 "data_size": 63488 00:13:30.444 }, 00:13:30.444 { 00:13:30.444 "name": "BaseBdev4", 00:13:30.444 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:30.444 "is_configured": true, 00:13:30.444 "data_offset": 2048, 00:13:30.444 "data_size": 63488 00:13:30.444 } 00:13:30.444 ] 00:13:30.444 }' 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.444 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.704 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.704 10:24:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.645 "name": "raid_bdev1", 00:13:31.645 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:31.645 "strip_size_kb": 0, 00:13:31.645 "state": "online", 00:13:31.645 "raid_level": "raid1", 00:13:31.645 "superblock": true, 00:13:31.645 "num_base_bdevs": 4, 00:13:31.645 "num_base_bdevs_discovered": 3, 00:13:31.645 "num_base_bdevs_operational": 3, 00:13:31.645 "process": { 00:13:31.645 "type": "rebuild", 00:13:31.645 "target": "spare", 00:13:31.645 "progress": { 00:13:31.645 "blocks": 49152, 00:13:31.645 "percent": 77 00:13:31.645 } 00:13:31.645 }, 00:13:31.645 "base_bdevs_list": [ 00:13:31.645 { 00:13:31.645 "name": "spare", 00:13:31.645 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:31.645 "is_configured": true, 00:13:31.645 "data_offset": 2048, 00:13:31.645 "data_size": 63488 00:13:31.645 }, 00:13:31.645 { 00:13:31.645 "name": null, 00:13:31.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.645 "is_configured": false, 00:13:31.645 "data_offset": 0, 00:13:31.645 "data_size": 63488 00:13:31.645 }, 00:13:31.645 { 00:13:31.645 "name": "BaseBdev3", 00:13:31.645 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:31.645 "is_configured": true, 00:13:31.645 "data_offset": 2048, 00:13:31.645 "data_size": 63488 00:13:31.645 }, 00:13:31.645 { 00:13:31.645 "name": "BaseBdev4", 00:13:31.645 "uuid": 
"18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:31.645 "is_configured": true, 00:13:31.645 "data_offset": 2048, 00:13:31.645 "data_size": 63488 00:13:31.645 } 00:13:31.645 ] 00:13:31.645 }' 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:31.645 10:24:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.215 [2024-11-19 10:24:45.874068] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:32.215 [2024-11-19 10:24:45.874180] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:32.215 [2024-11-19 10:24:45.874306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.803 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:32.803 "name": "raid_bdev1", 00:13:32.803 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:32.803 "strip_size_kb": 0, 00:13:32.803 "state": "online", 00:13:32.803 "raid_level": "raid1", 00:13:32.803 "superblock": true, 00:13:32.804 "num_base_bdevs": 4, 00:13:32.804 "num_base_bdevs_discovered": 3, 00:13:32.804 "num_base_bdevs_operational": 3, 00:13:32.804 "base_bdevs_list": [ 00:13:32.804 { 00:13:32.804 "name": "spare", 00:13:32.804 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:32.804 "is_configured": true, 00:13:32.804 "data_offset": 2048, 00:13:32.804 "data_size": 63488 00:13:32.804 }, 00:13:32.804 { 00:13:32.804 "name": null, 00:13:32.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.804 "is_configured": false, 00:13:32.804 "data_offset": 0, 00:13:32.804 "data_size": 63488 00:13:32.804 }, 00:13:32.804 { 00:13:32.804 "name": "BaseBdev3", 00:13:32.804 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:32.804 "is_configured": true, 00:13:32.804 "data_offset": 2048, 00:13:32.804 "data_size": 63488 00:13:32.804 }, 00:13:32.804 { 00:13:32.804 "name": "BaseBdev4", 00:13:32.804 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:32.804 "is_configured": true, 00:13:32.804 "data_offset": 2048, 00:13:32.804 "data_size": 63488 00:13:32.804 } 00:13:32.804 ] 00:13:32.804 }' 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:32.804 
10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.804 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.081 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.081 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.081 "name": "raid_bdev1", 00:13:33.081 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:33.081 "strip_size_kb": 0, 00:13:33.081 "state": "online", 00:13:33.081 "raid_level": "raid1", 00:13:33.081 "superblock": true, 00:13:33.081 "num_base_bdevs": 4, 00:13:33.081 "num_base_bdevs_discovered": 3, 00:13:33.081 "num_base_bdevs_operational": 3, 00:13:33.081 "base_bdevs_list": [ 00:13:33.081 { 00:13:33.082 "name": "spare", 00:13:33.082 "uuid": 
"218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:33.082 "is_configured": true, 00:13:33.082 "data_offset": 2048, 00:13:33.082 "data_size": 63488 00:13:33.082 }, 00:13:33.082 { 00:13:33.082 "name": null, 00:13:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.082 "is_configured": false, 00:13:33.082 "data_offset": 0, 00:13:33.082 "data_size": 63488 00:13:33.082 }, 00:13:33.082 { 00:13:33.082 "name": "BaseBdev3", 00:13:33.082 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:33.082 "is_configured": true, 00:13:33.082 "data_offset": 2048, 00:13:33.082 "data_size": 63488 00:13:33.082 }, 00:13:33.082 { 00:13:33.082 "name": "BaseBdev4", 00:13:33.082 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:33.082 "is_configured": true, 00:13:33.082 "data_offset": 2048, 00:13:33.082 "data_size": 63488 00:13:33.082 } 00:13:33.082 ] 00:13:33.082 }' 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.082 "name": "raid_bdev1", 00:13:33.082 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:33.082 "strip_size_kb": 0, 00:13:33.082 "state": "online", 00:13:33.082 "raid_level": "raid1", 00:13:33.082 "superblock": true, 00:13:33.082 "num_base_bdevs": 4, 00:13:33.082 "num_base_bdevs_discovered": 3, 00:13:33.082 "num_base_bdevs_operational": 3, 00:13:33.082 "base_bdevs_list": [ 00:13:33.082 { 00:13:33.082 "name": "spare", 00:13:33.082 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:33.082 "is_configured": true, 00:13:33.082 "data_offset": 2048, 00:13:33.082 "data_size": 63488 00:13:33.082 }, 00:13:33.082 { 00:13:33.082 "name": null, 00:13:33.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.082 "is_configured": false, 00:13:33.082 "data_offset": 0, 00:13:33.082 "data_size": 63488 00:13:33.082 }, 00:13:33.082 { 00:13:33.082 "name": "BaseBdev3", 00:13:33.082 "uuid": 
"cadc993c-2ee1-577a-9674-d585f1149304", 00:13:33.082 "is_configured": true, 00:13:33.082 "data_offset": 2048, 00:13:33.082 "data_size": 63488 00:13:33.082 }, 00:13:33.082 { 00:13:33.082 "name": "BaseBdev4", 00:13:33.082 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:33.082 "is_configured": true, 00:13:33.082 "data_offset": 2048, 00:13:33.082 "data_size": 63488 00:13:33.082 } 00:13:33.082 ] 00:13:33.082 }' 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.082 10:24:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.342 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.342 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.342 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.602 [2024-11-19 10:24:47.124923] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.602 [2024-11-19 10:24:47.125005] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.602 [2024-11-19 10:24:47.125113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.602 [2024-11-19 10:24:47.125220] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.602 [2024-11-19 10:24:47.125264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.602 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:33.861 /dev/nbd0 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.861 1+0 records in 00:13:33.861 1+0 records out 00:13:33.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000307742 s, 13.3 MB/s 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:33.861 /dev/nbd1 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:33.861 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.121 1+0 records in 00:13:34.121 1+0 records out 00:13:34.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228244 s, 17.9 MB/s 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.121 10:24:47 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:34.381 10:24:48 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.381 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:13:34.642 [2024-11-19 10:24:48.248102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.642 [2024-11-19 10:24:48.248154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.642 [2024-11-19 10:24:48.248176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:34.642 [2024-11-19 10:24:48.248185] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.642 [2024-11-19 10:24:48.250252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.642 [2024-11-19 10:24:48.250342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.642 [2024-11-19 10:24:48.250442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:34.642 [2024-11-19 10:24:48.250510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.642 [2024-11-19 10:24:48.250652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.642 [2024-11-19 10:24:48.250739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:34.642 spare 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.642 [2024-11-19 10:24:48.350626] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:34.642 [2024-11-19 10:24:48.350649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:34.642 [2024-11-19 
10:24:48.350924] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:34.642 [2024-11-19 10:24:48.351102] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:34.642 [2024-11-19 10:24:48.351117] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:34.642 [2024-11-19 10:24:48.351273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.642 10:24:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.642 "name": "raid_bdev1", 00:13:34.642 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:34.642 "strip_size_kb": 0, 00:13:34.642 "state": "online", 00:13:34.642 "raid_level": "raid1", 00:13:34.642 "superblock": true, 00:13:34.642 "num_base_bdevs": 4, 00:13:34.642 "num_base_bdevs_discovered": 3, 00:13:34.642 "num_base_bdevs_operational": 3, 00:13:34.642 "base_bdevs_list": [ 00:13:34.642 { 00:13:34.642 "name": "spare", 00:13:34.642 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:34.642 "is_configured": true, 00:13:34.642 "data_offset": 2048, 00:13:34.642 "data_size": 63488 00:13:34.642 }, 00:13:34.642 { 00:13:34.642 "name": null, 00:13:34.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.642 "is_configured": false, 00:13:34.642 "data_offset": 2048, 00:13:34.642 "data_size": 63488 00:13:34.642 }, 00:13:34.642 { 00:13:34.642 "name": "BaseBdev3", 00:13:34.642 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:34.642 "is_configured": true, 00:13:34.642 "data_offset": 2048, 00:13:34.642 "data_size": 63488 00:13:34.642 }, 00:13:34.642 { 00:13:34.642 "name": "BaseBdev4", 00:13:34.642 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:34.642 "is_configured": true, 00:13:34.642 "data_offset": 2048, 00:13:34.642 "data_size": 63488 00:13:34.642 } 00:13:34.642 ] 00:13:34.642 }' 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.642 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.211 "name": "raid_bdev1", 00:13:35.211 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:35.211 "strip_size_kb": 0, 00:13:35.211 "state": "online", 00:13:35.211 "raid_level": "raid1", 00:13:35.211 "superblock": true, 00:13:35.211 "num_base_bdevs": 4, 00:13:35.211 "num_base_bdevs_discovered": 3, 00:13:35.211 "num_base_bdevs_operational": 3, 00:13:35.211 "base_bdevs_list": [ 00:13:35.211 { 00:13:35.211 "name": "spare", 00:13:35.211 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:35.211 "is_configured": true, 00:13:35.211 "data_offset": 2048, 00:13:35.211 "data_size": 63488 00:13:35.211 }, 00:13:35.211 { 00:13:35.211 "name": null, 00:13:35.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.211 "is_configured": false, 00:13:35.211 "data_offset": 2048, 00:13:35.211 "data_size": 63488 00:13:35.211 }, 00:13:35.211 { 00:13:35.211 "name": 
"BaseBdev3", 00:13:35.211 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:35.211 "is_configured": true, 00:13:35.211 "data_offset": 2048, 00:13:35.211 "data_size": 63488 00:13:35.211 }, 00:13:35.211 { 00:13:35.211 "name": "BaseBdev4", 00:13:35.211 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:35.211 "is_configured": true, 00:13:35.211 "data_offset": 2048, 00:13:35.211 "data_size": 63488 00:13:35.211 } 00:13:35.211 ] 00:13:35.211 }' 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.211 [2024-11-19 10:24:48.978932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.211 10:24:48 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.211 10:24:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.470 10:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.470 10:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.470 "name": "raid_bdev1", 00:13:35.470 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:35.470 "strip_size_kb": 0, 00:13:35.470 "state": "online", 
00:13:35.470 "raid_level": "raid1", 00:13:35.470 "superblock": true, 00:13:35.470 "num_base_bdevs": 4, 00:13:35.470 "num_base_bdevs_discovered": 2, 00:13:35.470 "num_base_bdevs_operational": 2, 00:13:35.470 "base_bdevs_list": [ 00:13:35.470 { 00:13:35.470 "name": null, 00:13:35.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.470 "is_configured": false, 00:13:35.470 "data_offset": 0, 00:13:35.470 "data_size": 63488 00:13:35.470 }, 00:13:35.470 { 00:13:35.470 "name": null, 00:13:35.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.470 "is_configured": false, 00:13:35.470 "data_offset": 2048, 00:13:35.470 "data_size": 63488 00:13:35.470 }, 00:13:35.470 { 00:13:35.470 "name": "BaseBdev3", 00:13:35.470 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:35.470 "is_configured": true, 00:13:35.470 "data_offset": 2048, 00:13:35.470 "data_size": 63488 00:13:35.470 }, 00:13:35.470 { 00:13:35.470 "name": "BaseBdev4", 00:13:35.470 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:35.470 "is_configured": true, 00:13:35.470 "data_offset": 2048, 00:13:35.470 "data_size": 63488 00:13:35.470 } 00:13:35.470 ] 00:13:35.470 }' 00:13:35.470 10:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.470 10:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.730 10:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.730 10:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.730 10:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.730 [2024-11-19 10:24:49.422176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.730 [2024-11-19 10:24:49.422350] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:13:35.730 [2024-11-19 10:24:49.422363] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:35.730 [2024-11-19 10:24:49.422403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.730 [2024-11-19 10:24:49.436142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:35.730 10:24:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.730 10:24:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:35.730 [2024-11-19 10:24:49.437890] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:36.669 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.669 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.669 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.669 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.669 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.929 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.929 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.929 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.929 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.929 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.929 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.929 "name": "raid_bdev1", 00:13:36.929 "uuid": 
"08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:36.930 "strip_size_kb": 0, 00:13:36.930 "state": "online", 00:13:36.930 "raid_level": "raid1", 00:13:36.930 "superblock": true, 00:13:36.930 "num_base_bdevs": 4, 00:13:36.930 "num_base_bdevs_discovered": 3, 00:13:36.930 "num_base_bdevs_operational": 3, 00:13:36.930 "process": { 00:13:36.930 "type": "rebuild", 00:13:36.930 "target": "spare", 00:13:36.930 "progress": { 00:13:36.930 "blocks": 20480, 00:13:36.930 "percent": 32 00:13:36.930 } 00:13:36.930 }, 00:13:36.930 "base_bdevs_list": [ 00:13:36.930 { 00:13:36.930 "name": "spare", 00:13:36.930 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:36.930 "is_configured": true, 00:13:36.930 "data_offset": 2048, 00:13:36.930 "data_size": 63488 00:13:36.930 }, 00:13:36.930 { 00:13:36.930 "name": null, 00:13:36.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.930 "is_configured": false, 00:13:36.930 "data_offset": 2048, 00:13:36.930 "data_size": 63488 00:13:36.930 }, 00:13:36.930 { 00:13:36.930 "name": "BaseBdev3", 00:13:36.930 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:36.930 "is_configured": true, 00:13:36.930 "data_offset": 2048, 00:13:36.930 "data_size": 63488 00:13:36.930 }, 00:13:36.930 { 00:13:36.930 "name": "BaseBdev4", 00:13:36.930 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:36.930 "is_configured": true, 00:13:36.930 "data_offset": 2048, 00:13:36.930 "data_size": 63488 00:13:36.930 } 00:13:36.930 ] 00:13:36.930 }' 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.930 [2024-11-19 10:24:50.593623] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.930 [2024-11-19 10:24:50.642504] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.930 [2024-11-19 10:24:50.642558] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.930 [2024-11-19 10:24:50.642576] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.930 [2024-11-19 10:24:50.642583] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.930 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.190 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.190 "name": "raid_bdev1", 00:13:37.190 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:37.190 "strip_size_kb": 0, 00:13:37.190 "state": "online", 00:13:37.190 "raid_level": "raid1", 00:13:37.190 "superblock": true, 00:13:37.190 "num_base_bdevs": 4, 00:13:37.190 "num_base_bdevs_discovered": 2, 00:13:37.190 "num_base_bdevs_operational": 2, 00:13:37.190 "base_bdevs_list": [ 00:13:37.190 { 00:13:37.190 "name": null, 00:13:37.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.190 "is_configured": false, 00:13:37.190 "data_offset": 0, 00:13:37.190 "data_size": 63488 00:13:37.190 }, 00:13:37.190 { 00:13:37.190 "name": null, 00:13:37.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.190 "is_configured": false, 00:13:37.190 "data_offset": 2048, 00:13:37.190 "data_size": 63488 00:13:37.190 }, 00:13:37.190 { 00:13:37.190 "name": "BaseBdev3", 00:13:37.190 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:37.190 "is_configured": true, 00:13:37.190 "data_offset": 2048, 00:13:37.190 "data_size": 63488 00:13:37.190 }, 00:13:37.190 { 00:13:37.190 "name": "BaseBdev4", 00:13:37.190 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:37.190 "is_configured": true, 00:13:37.190 
"data_offset": 2048, 00:13:37.190 "data_size": 63488 00:13:37.190 } 00:13:37.190 ] 00:13:37.190 }' 00:13:37.190 10:24:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.190 10:24:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.451 10:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.451 10:24:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.451 10:24:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.451 [2024-11-19 10:24:51.090974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.451 [2024-11-19 10:24:51.091057] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.451 [2024-11-19 10:24:51.091083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:37.451 [2024-11-19 10:24:51.091093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.451 [2024-11-19 10:24:51.091552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.451 [2024-11-19 10:24:51.091571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.451 [2024-11-19 10:24:51.091666] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:37.451 [2024-11-19 10:24:51.091677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:37.451 [2024-11-19 10:24:51.091691] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:37.451 [2024-11-19 10:24:51.091719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.451 [2024-11-19 10:24:51.105625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:37.451 spare 00:13:37.451 10:24:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.451 10:24:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:37.451 [2024-11-19 10:24:51.107475] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.391 "name": "raid_bdev1", 00:13:38.391 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:38.391 "strip_size_kb": 0, 00:13:38.391 "state": "online", 00:13:38.391 
"raid_level": "raid1", 00:13:38.391 "superblock": true, 00:13:38.391 "num_base_bdevs": 4, 00:13:38.391 "num_base_bdevs_discovered": 3, 00:13:38.391 "num_base_bdevs_operational": 3, 00:13:38.391 "process": { 00:13:38.391 "type": "rebuild", 00:13:38.391 "target": "spare", 00:13:38.391 "progress": { 00:13:38.391 "blocks": 20480, 00:13:38.391 "percent": 32 00:13:38.391 } 00:13:38.391 }, 00:13:38.391 "base_bdevs_list": [ 00:13:38.391 { 00:13:38.391 "name": "spare", 00:13:38.391 "uuid": "218216d2-6623-57f3-8a98-8f7a93aeada5", 00:13:38.391 "is_configured": true, 00:13:38.391 "data_offset": 2048, 00:13:38.391 "data_size": 63488 00:13:38.391 }, 00:13:38.391 { 00:13:38.391 "name": null, 00:13:38.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.391 "is_configured": false, 00:13:38.391 "data_offset": 2048, 00:13:38.391 "data_size": 63488 00:13:38.391 }, 00:13:38.391 { 00:13:38.391 "name": "BaseBdev3", 00:13:38.391 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:38.391 "is_configured": true, 00:13:38.391 "data_offset": 2048, 00:13:38.391 "data_size": 63488 00:13:38.391 }, 00:13:38.391 { 00:13:38.391 "name": "BaseBdev4", 00:13:38.391 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:38.391 "is_configured": true, 00:13:38.391 "data_offset": 2048, 00:13:38.391 "data_size": 63488 00:13:38.391 } 00:13:38.391 ] 00:13:38.391 }' 00:13:38.391 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.652 [2024-11-19 10:24:52.267706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.652 [2024-11-19 10:24:52.312115] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:38.652 [2024-11-19 10:24:52.312171] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.652 [2024-11-19 10:24:52.312187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:38.652 [2024-11-19 10:24:52.312195] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.652 
10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.652 "name": "raid_bdev1", 00:13:38.652 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:38.652 "strip_size_kb": 0, 00:13:38.652 "state": "online", 00:13:38.652 "raid_level": "raid1", 00:13:38.652 "superblock": true, 00:13:38.652 "num_base_bdevs": 4, 00:13:38.652 "num_base_bdevs_discovered": 2, 00:13:38.652 "num_base_bdevs_operational": 2, 00:13:38.652 "base_bdevs_list": [ 00:13:38.652 { 00:13:38.652 "name": null, 00:13:38.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.652 "is_configured": false, 00:13:38.652 "data_offset": 0, 00:13:38.652 "data_size": 63488 00:13:38.652 }, 00:13:38.652 { 00:13:38.652 "name": null, 00:13:38.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.652 "is_configured": false, 00:13:38.652 "data_offset": 2048, 00:13:38.652 "data_size": 63488 00:13:38.652 }, 00:13:38.652 { 00:13:38.652 "name": "BaseBdev3", 00:13:38.652 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:38.652 "is_configured": true, 00:13:38.652 "data_offset": 2048, 00:13:38.652 "data_size": 63488 00:13:38.652 }, 00:13:38.652 { 00:13:38.652 "name": "BaseBdev4", 00:13:38.652 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:38.652 "is_configured": true, 00:13:38.652 "data_offset": 2048, 00:13:38.652 "data_size": 63488 00:13:38.652 } 00:13:38.652 ] 00:13:38.652 }' 00:13:38.652 10:24:52 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.652 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.223 "name": "raid_bdev1", 00:13:39.223 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:39.223 "strip_size_kb": 0, 00:13:39.223 "state": "online", 00:13:39.223 "raid_level": "raid1", 00:13:39.223 "superblock": true, 00:13:39.223 "num_base_bdevs": 4, 00:13:39.223 "num_base_bdevs_discovered": 2, 00:13:39.223 "num_base_bdevs_operational": 2, 00:13:39.223 "base_bdevs_list": [ 00:13:39.223 { 00:13:39.223 "name": null, 00:13:39.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.223 "is_configured": false, 00:13:39.223 "data_offset": 0, 00:13:39.223 "data_size": 63488 00:13:39.223 }, 00:13:39.223 
{ 00:13:39.223 "name": null, 00:13:39.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.223 "is_configured": false, 00:13:39.223 "data_offset": 2048, 00:13:39.223 "data_size": 63488 00:13:39.223 }, 00:13:39.223 { 00:13:39.223 "name": "BaseBdev3", 00:13:39.223 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:39.223 "is_configured": true, 00:13:39.223 "data_offset": 2048, 00:13:39.223 "data_size": 63488 00:13:39.223 }, 00:13:39.223 { 00:13:39.223 "name": "BaseBdev4", 00:13:39.223 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:39.223 "is_configured": true, 00:13:39.223 "data_offset": 2048, 00:13:39.223 "data_size": 63488 00:13:39.223 } 00:13:39.223 ] 00:13:39.223 }' 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.223 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.223 [2024-11-19 10:24:52.956498] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:39.223 [2024-11-19 10:24:52.956569] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.223 [2024-11-19 10:24:52.956587] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:39.223 [2024-11-19 10:24:52.956598] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.223 [2024-11-19 10:24:52.957025] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.223 [2024-11-19 10:24:52.957046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:39.223 [2024-11-19 10:24:52.957122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:39.224 [2024-11-19 10:24:52.957136] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:39.224 [2024-11-19 10:24:52.957144] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:39.224 [2024-11-19 10:24:52.957169] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:39.224 BaseBdev1 00:13:39.224 10:24:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.224 10:24:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.605 10:24:53 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.605 10:24:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.605 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.605 "name": "raid_bdev1", 00:13:40.605 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:40.605 "strip_size_kb": 0, 00:13:40.605 "state": "online", 00:13:40.605 "raid_level": "raid1", 00:13:40.605 "superblock": true, 00:13:40.605 "num_base_bdevs": 4, 00:13:40.605 "num_base_bdevs_discovered": 2, 00:13:40.605 "num_base_bdevs_operational": 2, 00:13:40.605 "base_bdevs_list": [ 00:13:40.605 { 00:13:40.605 "name": null, 00:13:40.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.605 "is_configured": false, 00:13:40.605 "data_offset": 0, 00:13:40.605 "data_size": 63488 00:13:40.605 }, 00:13:40.605 { 00:13:40.605 "name": null, 00:13:40.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.605 
"is_configured": false, 00:13:40.605 "data_offset": 2048, 00:13:40.605 "data_size": 63488 00:13:40.605 }, 00:13:40.605 { 00:13:40.605 "name": "BaseBdev3", 00:13:40.605 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:40.605 "is_configured": true, 00:13:40.605 "data_offset": 2048, 00:13:40.605 "data_size": 63488 00:13:40.605 }, 00:13:40.605 { 00:13:40.605 "name": "BaseBdev4", 00:13:40.605 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:40.605 "is_configured": true, 00:13:40.605 "data_offset": 2048, 00:13:40.605 "data_size": 63488 00:13:40.605 } 00:13:40.605 ] 00:13:40.605 }' 00:13:40.605 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.605 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:40.865 "name": "raid_bdev1", 00:13:40.865 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:40.865 "strip_size_kb": 0, 00:13:40.865 "state": "online", 00:13:40.865 "raid_level": "raid1", 00:13:40.865 "superblock": true, 00:13:40.865 "num_base_bdevs": 4, 00:13:40.865 "num_base_bdevs_discovered": 2, 00:13:40.865 "num_base_bdevs_operational": 2, 00:13:40.865 "base_bdevs_list": [ 00:13:40.865 { 00:13:40.865 "name": null, 00:13:40.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.865 "is_configured": false, 00:13:40.865 "data_offset": 0, 00:13:40.865 "data_size": 63488 00:13:40.865 }, 00:13:40.865 { 00:13:40.865 "name": null, 00:13:40.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.865 "is_configured": false, 00:13:40.865 "data_offset": 2048, 00:13:40.865 "data_size": 63488 00:13:40.865 }, 00:13:40.865 { 00:13:40.865 "name": "BaseBdev3", 00:13:40.865 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:40.865 "is_configured": true, 00:13:40.865 "data_offset": 2048, 00:13:40.865 "data_size": 63488 00:13:40.865 }, 00:13:40.865 { 00:13:40.865 "name": "BaseBdev4", 00:13:40.865 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:40.865 "is_configured": true, 00:13:40.865 "data_offset": 2048, 00:13:40.865 "data_size": 63488 00:13:40.865 } 00:13:40.865 ] 00:13:40.865 }' 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.865 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.866 [2024-11-19 10:24:54.550093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:40.866 [2024-11-19 10:24:54.550272] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:40.866 [2024-11-19 10:24:54.550294] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:40.866 request: 00:13:40.866 { 00:13:40.866 "base_bdev": "BaseBdev1", 00:13:40.866 "raid_bdev": "raid_bdev1", 00:13:40.866 "method": "bdev_raid_add_base_bdev", 00:13:40.866 "req_id": 1 00:13:40.866 } 00:13:40.866 Got JSON-RPC error response 00:13:40.866 response: 00:13:40.866 { 00:13:40.866 "code": -22, 00:13:40.866 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:40.866 } 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:40.866 10:24:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.805 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:42.064 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.064 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.064 "name": "raid_bdev1", 00:13:42.064 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:42.064 "strip_size_kb": 0, 00:13:42.064 "state": "online", 00:13:42.064 "raid_level": "raid1", 00:13:42.064 "superblock": true, 00:13:42.064 "num_base_bdevs": 4, 00:13:42.064 "num_base_bdevs_discovered": 2, 00:13:42.064 "num_base_bdevs_operational": 2, 00:13:42.064 "base_bdevs_list": [ 00:13:42.064 { 00:13:42.064 "name": null, 00:13:42.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.064 "is_configured": false, 00:13:42.064 "data_offset": 0, 00:13:42.064 "data_size": 63488 00:13:42.064 }, 00:13:42.064 { 00:13:42.064 "name": null, 00:13:42.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.064 "is_configured": false, 00:13:42.064 "data_offset": 2048, 00:13:42.064 "data_size": 63488 00:13:42.064 }, 00:13:42.064 { 00:13:42.064 "name": "BaseBdev3", 00:13:42.064 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:42.064 "is_configured": true, 00:13:42.064 "data_offset": 2048, 00:13:42.064 "data_size": 63488 00:13:42.064 }, 00:13:42.064 { 00:13:42.064 "name": "BaseBdev4", 00:13:42.064 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:42.064 "is_configured": true, 00:13:42.064 "data_offset": 2048, 00:13:42.064 "data_size": 63488 00:13:42.064 } 00:13:42.064 ] 00:13:42.064 }' 00:13:42.064 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.064 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.324 10:24:55 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.324 "name": "raid_bdev1", 00:13:42.324 "uuid": "08b365bd-b440-4081-b35f-6e05f8605fc7", 00:13:42.324 "strip_size_kb": 0, 00:13:42.324 "state": "online", 00:13:42.324 "raid_level": "raid1", 00:13:42.324 "superblock": true, 00:13:42.324 "num_base_bdevs": 4, 00:13:42.324 "num_base_bdevs_discovered": 2, 00:13:42.324 "num_base_bdevs_operational": 2, 00:13:42.324 "base_bdevs_list": [ 00:13:42.324 { 00:13:42.324 "name": null, 00:13:42.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.324 "is_configured": false, 00:13:42.324 "data_offset": 0, 00:13:42.324 "data_size": 63488 00:13:42.324 }, 00:13:42.324 { 00:13:42.324 "name": null, 00:13:42.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.324 "is_configured": false, 00:13:42.324 "data_offset": 2048, 00:13:42.324 "data_size": 63488 00:13:42.324 }, 00:13:42.324 { 00:13:42.324 "name": "BaseBdev3", 00:13:42.324 "uuid": "cadc993c-2ee1-577a-9674-d585f1149304", 00:13:42.324 "is_configured": true, 00:13:42.324 "data_offset": 2048, 00:13:42.324 "data_size": 63488 00:13:42.324 }, 
00:13:42.324 { 00:13:42.324 "name": "BaseBdev4", 00:13:42.324 "uuid": "18050023-21e5-59fe-94a7-f11cbdea9e69", 00:13:42.324 "is_configured": true, 00:13:42.324 "data_offset": 2048, 00:13:42.324 "data_size": 63488 00:13:42.324 } 00:13:42.324 ] 00:13:42.324 }' 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.324 10:24:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77718 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77718 ']' 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77718 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77718 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77718' 00:13:42.324 killing process with pid 77718 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77718 00:13:42.324 Received shutdown signal, test time was about 60.000000 seconds 00:13:42.324 00:13:42.324 Latency(us) 00:13:42.324 
[2024-11-19T10:24:56.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.324 [2024-11-19T10:24:56.105Z] =================================================================================================================== 00:13:42.324 [2024-11-19T10:24:56.105Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.324 [2024-11-19 10:24:56.078202] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.324 10:24:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77718 00:13:42.324 [2024-11-19 10:24:56.078317] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.324 [2024-11-19 10:24:56.078381] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.324 [2024-11-19 10:24:56.078389] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:42.894 [2024-11-19 10:24:56.557000] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:43.833 10:24:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:43.833 00:13:43.833 real 0m25.012s 00:13:43.833 user 0m29.786s 00:13:43.833 sys 0m3.829s 00:13:43.833 ************************************ 00:13:43.833 END TEST raid_rebuild_test_sb 00:13:43.833 10:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.833 10:24:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.833 ************************************ 00:13:44.093 10:24:57 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:44.093 10:24:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:44.093 10:24:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.093 10:24:57 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:13:44.093 ************************************ 00:13:44.093 START TEST raid_rebuild_test_io 00:13:44.093 ************************************ 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.093 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78473 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78473 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78473 ']' 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.094 10:24:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.094 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:44.094 Zero copy mechanism will not be used. 00:13:44.094 [2024-11-19 10:24:57.772475] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:44.094 [2024-11-19 10:24:57.772595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78473 ] 00:13:44.384 [2024-11-19 10:24:57.937992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:44.384 [2024-11-19 10:24:58.044467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.649 [2024-11-19 10:24:58.245891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.649 [2024-11-19 10:24:58.245949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.914 BaseBdev1_malloc 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.914 [2024-11-19 10:24:58.630471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:44.914 [2024-11-19 10:24:58.630552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.914 [2024-11-19 10:24:58.630575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:44.914 [2024-11-19 10:24:58.630586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.914 [2024-11-19 10:24:58.632602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.914 [2024-11-19 10:24:58.632693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:44.914 BaseBdev1 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:13:44.914 BaseBdev2_malloc 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.914 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.914 [2024-11-19 10:24:58.684128] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:44.914 [2024-11-19 10:24:58.684198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:44.914 [2024-11-19 10:24:58.684220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:44.914 [2024-11-19 10:24:58.684233] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:44.914 [2024-11-19 10:24:58.686258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:44.914 [2024-11-19 10:24:58.686293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:44.914 BaseBdev2 00:13:44.915 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.915 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:44.915 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:44.915 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.915 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 BaseBdev3_malloc 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 [2024-11-19 10:24:58.768876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:45.175 [2024-11-19 10:24:58.768933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.175 [2024-11-19 10:24:58.768955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:45.175 [2024-11-19 10:24:58.768965] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.175 [2024-11-19 10:24:58.770909] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.175 [2024-11-19 10:24:58.770990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:45.175 BaseBdev3 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 BaseBdev4_malloc 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 [2024-11-19 10:24:58.822962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:45.175 [2024-11-19 10:24:58.823038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.175 [2024-11-19 10:24:58.823057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:45.175 [2024-11-19 10:24:58.823067] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.175 [2024-11-19 10:24:58.825172] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.175 [2024-11-19 10:24:58.825212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:45.175 BaseBdev4 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 spare_malloc 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.175 spare_delay 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.175 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.176 [2024-11-19 10:24:58.890084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:45.176 [2024-11-19 10:24:58.890191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.176 [2024-11-19 10:24:58.890215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:45.176 [2024-11-19 10:24:58.890224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.176 [2024-11-19 10:24:58.892188] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.176 [2024-11-19 10:24:58.892240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.176 spare 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.176 [2024-11-19 10:24:58.902108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.176 [2024-11-19 10:24:58.903805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.176 [2024-11-19 10:24:58.903873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:45.176 [2024-11-19 10:24:58.903922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:13:45.176 [2024-11-19 10:24:58.904004] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:45.176 [2024-11-19 10:24:58.904018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:45.176 [2024-11-19 10:24:58.904249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:45.176 [2024-11-19 10:24:58.904414] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:45.176 [2024-11-19 10:24:58.904427] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:45.176 [2024-11-19 10:24:58.904573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.176 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.438 10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.438 "name": "raid_bdev1", 00:13:45.438 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:45.438 "strip_size_kb": 0, 00:13:45.438 "state": "online", 00:13:45.438 "raid_level": "raid1", 00:13:45.438 "superblock": false, 00:13:45.438 "num_base_bdevs": 4, 00:13:45.438 "num_base_bdevs_discovered": 4, 00:13:45.438 "num_base_bdevs_operational": 4, 00:13:45.438 "base_bdevs_list": [ 00:13:45.438 { 00:13:45.438 "name": "BaseBdev1", 00:13:45.438 "uuid": "033a4042-f8ac-553c-bba3-d9bf1c1eb9b3", 00:13:45.438 "is_configured": true, 00:13:45.438 "data_offset": 0, 00:13:45.438 "data_size": 65536 00:13:45.438 }, 00:13:45.438 { 00:13:45.438 "name": "BaseBdev2", 00:13:45.438 "uuid": "911bb9be-d67e-51db-914b-5199737bab0a", 00:13:45.438 "is_configured": true, 00:13:45.438 "data_offset": 0, 00:13:45.438 "data_size": 65536 00:13:45.438 }, 00:13:45.438 { 00:13:45.438 "name": "BaseBdev3", 00:13:45.438 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:45.438 "is_configured": true, 00:13:45.438 "data_offset": 0, 00:13:45.438 "data_size": 65536 00:13:45.438 }, 00:13:45.438 { 00:13:45.438 "name": "BaseBdev4", 00:13:45.438 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:45.438 "is_configured": true, 00:13:45.438 "data_offset": 0, 00:13:45.438 "data_size": 65536 00:13:45.438 } 00:13:45.438 ] 00:13:45.438 }' 00:13:45.438 
10:24:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.438 10:24:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:45.698 [2024-11-19 10:24:59.305690] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:45.698 10:24:59 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.698 [2024-11-19 10:24:59.381227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.698 "name": "raid_bdev1", 00:13:45.698 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:45.698 "strip_size_kb": 0, 00:13:45.698 "state": "online", 00:13:45.698 "raid_level": "raid1", 00:13:45.698 "superblock": false, 00:13:45.698 "num_base_bdevs": 4, 00:13:45.698 "num_base_bdevs_discovered": 3, 00:13:45.698 "num_base_bdevs_operational": 3, 00:13:45.698 "base_bdevs_list": [ 00:13:45.698 { 00:13:45.698 "name": null, 00:13:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.698 "is_configured": false, 00:13:45.698 "data_offset": 0, 00:13:45.698 "data_size": 65536 00:13:45.698 }, 00:13:45.698 { 00:13:45.698 "name": "BaseBdev2", 00:13:45.698 "uuid": "911bb9be-d67e-51db-914b-5199737bab0a", 00:13:45.698 "is_configured": true, 00:13:45.698 "data_offset": 0, 00:13:45.698 "data_size": 65536 00:13:45.698 }, 00:13:45.698 { 00:13:45.698 "name": "BaseBdev3", 00:13:45.698 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:45.698 "is_configured": true, 00:13:45.698 "data_offset": 0, 00:13:45.698 "data_size": 65536 00:13:45.698 }, 00:13:45.698 { 00:13:45.698 "name": "BaseBdev4", 00:13:45.698 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:45.698 "is_configured": true, 00:13:45.698 "data_offset": 0, 00:13:45.698 "data_size": 65536 00:13:45.698 } 00:13:45.698 ] 00:13:45.698 }' 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.698 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.698 [2024-11-19 10:24:59.477003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:45.957 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:45.957 Zero copy mechanism will not be used. 00:13:45.957 Running I/O for 60 seconds... 
00:13:46.217 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.217 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.217 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.217 [2024-11-19 10:24:59.839909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.217 10:24:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.217 10:24:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:46.217 [2024-11-19 10:24:59.899712] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:46.217 [2024-11-19 10:24:59.901550] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.477 [2024-11-19 10:25:00.004098] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:46.477 [2024-11-19 10:25:00.005526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:46.477 [2024-11-19 10:25:00.229080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:46.477 [2024-11-19 10:25:00.229831] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:46.996 141.00 IOPS, 423.00 MiB/s [2024-11-19T10:25:00.777Z] [2024-11-19 10:25:00.574348] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:47.256 [2024-11-19 10:25:00.826780] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.256 "name": "raid_bdev1", 00:13:47.256 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:47.256 "strip_size_kb": 0, 00:13:47.256 "state": "online", 00:13:47.256 "raid_level": "raid1", 00:13:47.256 "superblock": false, 00:13:47.256 "num_base_bdevs": 4, 00:13:47.256 "num_base_bdevs_discovered": 4, 00:13:47.256 "num_base_bdevs_operational": 4, 00:13:47.256 "process": { 00:13:47.256 "type": "rebuild", 00:13:47.256 "target": "spare", 00:13:47.256 "progress": { 00:13:47.256 "blocks": 10240, 00:13:47.256 "percent": 15 00:13:47.256 } 00:13:47.256 }, 00:13:47.256 "base_bdevs_list": [ 00:13:47.256 { 00:13:47.256 "name": "spare", 00:13:47.256 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:47.256 "is_configured": true, 00:13:47.256 "data_offset": 0, 00:13:47.256 "data_size": 65536 00:13:47.256 }, 00:13:47.256 { 
00:13:47.256 "name": "BaseBdev2", 00:13:47.256 "uuid": "911bb9be-d67e-51db-914b-5199737bab0a", 00:13:47.256 "is_configured": true, 00:13:47.256 "data_offset": 0, 00:13:47.256 "data_size": 65536 00:13:47.256 }, 00:13:47.256 { 00:13:47.256 "name": "BaseBdev3", 00:13:47.256 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:47.256 "is_configured": true, 00:13:47.256 "data_offset": 0, 00:13:47.256 "data_size": 65536 00:13:47.256 }, 00:13:47.256 { 00:13:47.256 "name": "BaseBdev4", 00:13:47.256 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:47.256 "is_configured": true, 00:13:47.256 "data_offset": 0, 00:13:47.256 "data_size": 65536 00:13:47.256 } 00:13:47.256 ] 00:13:47.256 }' 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.256 10:25:00 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.256 [2024-11-19 10:25:00.992823] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.516 [2024-11-19 10:25:01.073665] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.516 [2024-11-19 10:25:01.082447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.516 [2024-11-19 10:25:01.082539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.516 [2024-11-19 10:25:01.082559] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:47.516 [2024-11-19 10:25:01.104467] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.516 "name": "raid_bdev1", 00:13:47.516 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:47.516 "strip_size_kb": 0, 00:13:47.516 "state": "online", 00:13:47.516 "raid_level": "raid1", 00:13:47.516 "superblock": false, 00:13:47.516 "num_base_bdevs": 4, 00:13:47.516 "num_base_bdevs_discovered": 3, 00:13:47.516 "num_base_bdevs_operational": 3, 00:13:47.516 "base_bdevs_list": [ 00:13:47.516 { 00:13:47.516 "name": null, 00:13:47.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.516 "is_configured": false, 00:13:47.516 "data_offset": 0, 00:13:47.516 "data_size": 65536 00:13:47.516 }, 00:13:47.516 { 00:13:47.516 "name": "BaseBdev2", 00:13:47.516 "uuid": "911bb9be-d67e-51db-914b-5199737bab0a", 00:13:47.516 "is_configured": true, 00:13:47.516 "data_offset": 0, 00:13:47.516 "data_size": 65536 00:13:47.516 }, 00:13:47.516 { 00:13:47.516 "name": "BaseBdev3", 00:13:47.516 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:47.516 "is_configured": true, 00:13:47.516 "data_offset": 0, 00:13:47.516 "data_size": 65536 00:13:47.516 }, 00:13:47.516 { 00:13:47.516 "name": "BaseBdev4", 00:13:47.516 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:47.516 "is_configured": true, 00:13:47.516 "data_offset": 0, 00:13:47.516 "data_size": 65536 00:13:47.516 } 00:13:47.516 ] 00:13:47.516 }' 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.516 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.035 143.50 IOPS, 430.50 MiB/s [2024-11-19T10:25:01.816Z] 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.035 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.035 "name": "raid_bdev1", 00:13:48.035 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:48.035 "strip_size_kb": 0, 00:13:48.035 "state": "online", 00:13:48.035 "raid_level": "raid1", 00:13:48.035 "superblock": false, 00:13:48.035 "num_base_bdevs": 4, 00:13:48.035 "num_base_bdevs_discovered": 3, 00:13:48.035 "num_base_bdevs_operational": 3, 00:13:48.035 "base_bdevs_list": [ 00:13:48.035 { 00:13:48.035 "name": null, 00:13:48.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.035 "is_configured": false, 00:13:48.035 "data_offset": 0, 00:13:48.035 "data_size": 65536 00:13:48.035 }, 00:13:48.035 { 00:13:48.036 "name": "BaseBdev2", 00:13:48.036 "uuid": "911bb9be-d67e-51db-914b-5199737bab0a", 00:13:48.036 "is_configured": true, 00:13:48.036 "data_offset": 0, 00:13:48.036 "data_size": 65536 00:13:48.036 }, 00:13:48.036 { 00:13:48.036 "name": "BaseBdev3", 00:13:48.036 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:48.036 "is_configured": true, 00:13:48.036 "data_offset": 0, 00:13:48.036 "data_size": 65536 00:13:48.036 }, 00:13:48.036 { 00:13:48.036 "name": "BaseBdev4", 00:13:48.036 
"uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:48.036 "is_configured": true, 00:13:48.036 "data_offset": 0, 00:13:48.036 "data_size": 65536 00:13:48.036 } 00:13:48.036 ] 00:13:48.036 }' 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.036 [2024-11-19 10:25:01.691106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.036 10:25:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:48.036 [2024-11-19 10:25:01.750182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:48.036 [2024-11-19 10:25:01.752149] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.296 [2024-11-19 10:25:01.868275] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:48.296 [2024-11-19 10:25:01.868748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:48.296 [2024-11-19 10:25:01.975711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 
offset_begin: 0 offset_end: 6144 00:13:48.296 [2024-11-19 10:25:01.976395] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:48.555 [2024-11-19 10:25:02.312560] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:48.815 140.00 IOPS, 420.00 MiB/s [2024-11-19T10:25:02.596Z] [2024-11-19 10:25:02.532692] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:48.815 [2024-11-19 10:25:02.532898] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.074 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.074 "name": "raid_bdev1", 00:13:49.074 
"uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:49.074 "strip_size_kb": 0, 00:13:49.074 "state": "online", 00:13:49.074 "raid_level": "raid1", 00:13:49.074 "superblock": false, 00:13:49.074 "num_base_bdevs": 4, 00:13:49.074 "num_base_bdevs_discovered": 4, 00:13:49.074 "num_base_bdevs_operational": 4, 00:13:49.074 "process": { 00:13:49.074 "type": "rebuild", 00:13:49.074 "target": "spare", 00:13:49.074 "progress": { 00:13:49.074 "blocks": 10240, 00:13:49.074 "percent": 15 00:13:49.074 } 00:13:49.074 }, 00:13:49.074 "base_bdevs_list": [ 00:13:49.074 { 00:13:49.074 "name": "spare", 00:13:49.074 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:49.074 "is_configured": true, 00:13:49.074 "data_offset": 0, 00:13:49.074 "data_size": 65536 00:13:49.074 }, 00:13:49.074 { 00:13:49.074 "name": "BaseBdev2", 00:13:49.074 "uuid": "911bb9be-d67e-51db-914b-5199737bab0a", 00:13:49.074 "is_configured": true, 00:13:49.075 "data_offset": 0, 00:13:49.075 "data_size": 65536 00:13:49.075 }, 00:13:49.075 { 00:13:49.075 "name": "BaseBdev3", 00:13:49.075 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:49.075 "is_configured": true, 00:13:49.075 "data_offset": 0, 00:13:49.075 "data_size": 65536 00:13:49.075 }, 00:13:49.075 { 00:13:49.075 "name": "BaseBdev4", 00:13:49.075 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:49.075 "is_configured": true, 00:13:49.075 "data_offset": 0, 00:13:49.075 "data_size": 65536 00:13:49.075 } 00:13:49.075 ] 00:13:49.075 }' 00:13:49.075 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.075 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.075 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.335 [2024-11-19 10:25:02.880232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:49.335 [2024-11-19 10:25:02.893170] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:49.335 [2024-11-19 10:25:02.893208] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.335 10:25:02 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.335 "name": "raid_bdev1", 00:13:49.335 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:49.335 "strip_size_kb": 0, 00:13:49.335 "state": "online", 00:13:49.335 "raid_level": "raid1", 00:13:49.335 "superblock": false, 00:13:49.335 "num_base_bdevs": 4, 00:13:49.335 "num_base_bdevs_discovered": 3, 00:13:49.335 "num_base_bdevs_operational": 3, 00:13:49.335 "process": { 00:13:49.335 "type": "rebuild", 00:13:49.335 "target": "spare", 00:13:49.335 "progress": { 00:13:49.335 "blocks": 14336, 00:13:49.335 "percent": 21 00:13:49.335 } 00:13:49.335 }, 00:13:49.335 "base_bdevs_list": [ 00:13:49.335 { 00:13:49.335 "name": "spare", 00:13:49.335 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:49.335 "is_configured": true, 00:13:49.335 "data_offset": 0, 00:13:49.335 "data_size": 65536 00:13:49.335 }, 00:13:49.335 { 00:13:49.335 "name": null, 00:13:49.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.335 "is_configured": false, 00:13:49.335 "data_offset": 0, 00:13:49.335 "data_size": 65536 00:13:49.335 }, 00:13:49.335 { 00:13:49.335 "name": "BaseBdev3", 00:13:49.335 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:49.335 "is_configured": true, 00:13:49.335 "data_offset": 0, 00:13:49.335 "data_size": 65536 00:13:49.335 }, 00:13:49.335 { 00:13:49.335 "name": "BaseBdev4", 00:13:49.335 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 
00:13:49.335 "is_configured": true, 00:13:49.335 "data_offset": 0, 00:13:49.335 "data_size": 65536 00:13:49.335 } 00:13:49.335 ] 00:13:49.335 }' 00:13:49.335 10:25:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.335 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 
-- # raid_bdev_info='{ 00:13:49.335 "name": "raid_bdev1", 00:13:49.335 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:49.335 "strip_size_kb": 0, 00:13:49.335 "state": "online", 00:13:49.335 "raid_level": "raid1", 00:13:49.335 "superblock": false, 00:13:49.335 "num_base_bdevs": 4, 00:13:49.336 "num_base_bdevs_discovered": 3, 00:13:49.336 "num_base_bdevs_operational": 3, 00:13:49.336 "process": { 00:13:49.336 "type": "rebuild", 00:13:49.336 "target": "spare", 00:13:49.336 "progress": { 00:13:49.336 "blocks": 16384, 00:13:49.336 "percent": 25 00:13:49.336 } 00:13:49.336 }, 00:13:49.336 "base_bdevs_list": [ 00:13:49.336 { 00:13:49.336 "name": "spare", 00:13:49.336 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:49.336 "is_configured": true, 00:13:49.336 "data_offset": 0, 00:13:49.336 "data_size": 65536 00:13:49.336 }, 00:13:49.336 { 00:13:49.336 "name": null, 00:13:49.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.336 "is_configured": false, 00:13:49.336 "data_offset": 0, 00:13:49.336 "data_size": 65536 00:13:49.336 }, 00:13:49.336 { 00:13:49.336 "name": "BaseBdev3", 00:13:49.336 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:49.336 "is_configured": true, 00:13:49.336 "data_offset": 0, 00:13:49.336 "data_size": 65536 00:13:49.336 }, 00:13:49.336 { 00:13:49.336 "name": "BaseBdev4", 00:13:49.336 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:49.336 "is_configured": true, 00:13:49.336 "data_offset": 0, 00:13:49.336 "data_size": 65536 00:13:49.336 } 00:13:49.336 ] 00:13:49.336 }' 00:13:49.336 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.596 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.596 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.596 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:13:49.596 10:25:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:49.596 [2024-11-19 10:25:03.266390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:49.855 120.25 IOPS, 360.75 MiB/s [2024-11-19T10:25:03.636Z] [2024-11-19 10:25:03.596124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:50.115 [2024-11-19 10:25:03.807000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:50.115 [2024-11-19 10:25:03.807249] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.685 "name": "raid_bdev1", 00:13:50.685 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:50.685 "strip_size_kb": 0, 00:13:50.685 "state": "online", 00:13:50.685 "raid_level": "raid1", 00:13:50.685 "superblock": false, 00:13:50.685 "num_base_bdevs": 4, 00:13:50.685 "num_base_bdevs_discovered": 3, 00:13:50.685 "num_base_bdevs_operational": 3, 00:13:50.685 "process": { 00:13:50.685 "type": "rebuild", 00:13:50.685 "target": "spare", 00:13:50.685 "progress": { 00:13:50.685 "blocks": 32768, 00:13:50.685 "percent": 50 00:13:50.685 } 00:13:50.685 }, 00:13:50.685 "base_bdevs_list": [ 00:13:50.685 { 00:13:50.685 "name": "spare", 00:13:50.685 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:50.685 "is_configured": true, 00:13:50.685 "data_offset": 0, 00:13:50.685 "data_size": 65536 00:13:50.685 }, 00:13:50.685 { 00:13:50.685 "name": null, 00:13:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.685 "is_configured": false, 00:13:50.685 "data_offset": 0, 00:13:50.685 "data_size": 65536 00:13:50.685 }, 00:13:50.685 { 00:13:50.685 "name": "BaseBdev3", 00:13:50.685 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:50.685 "is_configured": true, 00:13:50.685 "data_offset": 0, 00:13:50.685 "data_size": 65536 00:13:50.685 }, 00:13:50.685 { 00:13:50.685 "name": "BaseBdev4", 00:13:50.685 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:50.685 "is_configured": true, 00:13:50.685 "data_offset": 0, 00:13:50.685 "data_size": 65536 00:13:50.685 } 00:13:50.685 ] 00:13:50.685 }' 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target 
// "none"' 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.685 10:25:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.945 [2024-11-19 10:25:04.469339] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:50.945 104.60 IOPS, 313.80 MiB/s [2024-11-19T10:25:04.726Z] [2024-11-19 10:25:04.576310] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:51.884 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.884 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.885 "name": "raid_bdev1", 00:13:51.885 "uuid": 
"8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:51.885 "strip_size_kb": 0, 00:13:51.885 "state": "online", 00:13:51.885 "raid_level": "raid1", 00:13:51.885 "superblock": false, 00:13:51.885 "num_base_bdevs": 4, 00:13:51.885 "num_base_bdevs_discovered": 3, 00:13:51.885 "num_base_bdevs_operational": 3, 00:13:51.885 "process": { 00:13:51.885 "type": "rebuild", 00:13:51.885 "target": "spare", 00:13:51.885 "progress": { 00:13:51.885 "blocks": 53248, 00:13:51.885 "percent": 81 00:13:51.885 } 00:13:51.885 }, 00:13:51.885 "base_bdevs_list": [ 00:13:51.885 { 00:13:51.885 "name": "spare", 00:13:51.885 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:51.885 "is_configured": true, 00:13:51.885 "data_offset": 0, 00:13:51.885 "data_size": 65536 00:13:51.885 }, 00:13:51.885 { 00:13:51.885 "name": null, 00:13:51.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.885 "is_configured": false, 00:13:51.885 "data_offset": 0, 00:13:51.885 "data_size": 65536 00:13:51.885 }, 00:13:51.885 { 00:13:51.885 "name": "BaseBdev3", 00:13:51.885 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:51.885 "is_configured": true, 00:13:51.885 "data_offset": 0, 00:13:51.885 "data_size": 65536 00:13:51.885 }, 00:13:51.885 { 00:13:51.885 "name": "BaseBdev4", 00:13:51.885 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:51.885 "is_configured": true, 00:13:51.885 "data_offset": 0, 00:13:51.885 "data_size": 65536 00:13:51.885 } 00:13:51.885 ] 00:13:51.885 }' 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.885 10:25:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # 
sleep 1 00:13:52.155 94.83 IOPS, 284.50 MiB/s [2024-11-19T10:25:05.936Z] [2024-11-19 10:25:05.909044] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:52.430 [2024-11-19 10:25:06.008870] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:52.430 [2024-11-19 10:25:06.016809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.689 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.689 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.689 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.689 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.689 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.689 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.948 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.948 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.948 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.948 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.948 86.14 IOPS, 258.43 MiB/s [2024-11-19T10:25:06.729Z] 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.948 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.948 "name": "raid_bdev1", 00:13:52.948 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:52.948 "strip_size_kb": 0, 00:13:52.948 "state": "online", 00:13:52.948 
"raid_level": "raid1", 00:13:52.948 "superblock": false, 00:13:52.948 "num_base_bdevs": 4, 00:13:52.948 "num_base_bdevs_discovered": 3, 00:13:52.948 "num_base_bdevs_operational": 3, 00:13:52.948 "base_bdevs_list": [ 00:13:52.948 { 00:13:52.948 "name": "spare", 00:13:52.948 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:52.948 "is_configured": true, 00:13:52.948 "data_offset": 0, 00:13:52.948 "data_size": 65536 00:13:52.948 }, 00:13:52.948 { 00:13:52.948 "name": null, 00:13:52.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.948 "is_configured": false, 00:13:52.948 "data_offset": 0, 00:13:52.948 "data_size": 65536 00:13:52.948 }, 00:13:52.948 { 00:13:52.948 "name": "BaseBdev3", 00:13:52.948 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:52.948 "is_configured": true, 00:13:52.948 "data_offset": 0, 00:13:52.949 "data_size": 65536 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "BaseBdev4", 00:13:52.949 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 0, 00:13:52.949 "data_size": 65536 00:13:52.949 } 00:13:52.949 ] 00:13:52.949 }' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.949 "name": "raid_bdev1", 00:13:52.949 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:52.949 "strip_size_kb": 0, 00:13:52.949 "state": "online", 00:13:52.949 "raid_level": "raid1", 00:13:52.949 "superblock": false, 00:13:52.949 "num_base_bdevs": 4, 00:13:52.949 "num_base_bdevs_discovered": 3, 00:13:52.949 "num_base_bdevs_operational": 3, 00:13:52.949 "base_bdevs_list": [ 00:13:52.949 { 00:13:52.949 "name": "spare", 00:13:52.949 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 0, 00:13:52.949 "data_size": 65536 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": null, 00:13:52.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.949 "is_configured": false, 00:13:52.949 "data_offset": 0, 00:13:52.949 "data_size": 65536 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "BaseBdev3", 00:13:52.949 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 0, 00:13:52.949 "data_size": 65536 00:13:52.949 }, 00:13:52.949 { 00:13:52.949 "name": "BaseBdev4", 
00:13:52.949 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:52.949 "is_configured": true, 00:13:52.949 "data_offset": 0, 00:13:52.949 "data_size": 65536 00:13:52.949 } 00:13:52.949 ] 00:13:52.949 }' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.949 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.208 "name": "raid_bdev1", 00:13:53.208 "uuid": "8414e4dd-b041-4ed3-a790-f08750a80ba1", 00:13:53.208 "strip_size_kb": 0, 00:13:53.208 "state": "online", 00:13:53.208 "raid_level": "raid1", 00:13:53.208 "superblock": false, 00:13:53.208 "num_base_bdevs": 4, 00:13:53.208 "num_base_bdevs_discovered": 3, 00:13:53.208 "num_base_bdevs_operational": 3, 00:13:53.208 "base_bdevs_list": [ 00:13:53.208 { 00:13:53.208 "name": "spare", 00:13:53.208 "uuid": "c403c844-6af0-55c3-92ab-fd0281a4a4c7", 00:13:53.208 "is_configured": true, 00:13:53.208 "data_offset": 0, 00:13:53.208 "data_size": 65536 00:13:53.208 }, 00:13:53.208 { 00:13:53.208 "name": null, 00:13:53.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.208 "is_configured": false, 00:13:53.208 "data_offset": 0, 00:13:53.208 "data_size": 65536 00:13:53.208 }, 00:13:53.208 { 00:13:53.208 "name": "BaseBdev3", 00:13:53.208 "uuid": "353e9180-043e-52bd-8af3-16f3547ebd9f", 00:13:53.208 "is_configured": true, 00:13:53.208 "data_offset": 0, 00:13:53.208 "data_size": 65536 00:13:53.208 }, 00:13:53.208 { 00:13:53.208 "name": "BaseBdev4", 00:13:53.208 "uuid": "127a52e0-c2e0-5244-b759-c50e6e371c3e", 00:13:53.208 "is_configured": true, 00:13:53.208 "data_offset": 0, 00:13:53.208 "data_size": 65536 00:13:53.208 } 00:13:53.208 ] 00:13:53.208 }' 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.208 10:25:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.468 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # 
rpc_cmd bdev_raid_delete raid_bdev1 00:13:53.468 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.468 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.468 [2024-11-19 10:25:07.147638] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:53.468 [2024-11-19 10:25:07.147725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:53.468 00:13:53.468 Latency(us) 00:13:53.468 [2024-11-19T10:25:07.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:53.468 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:53.468 raid_bdev1 : 7.75 81.15 243.44 0.00 0.00 17876.63 316.59 116304.94 00:13:53.468 [2024-11-19T10:25:07.249Z] =================================================================================================================== 00:13:53.468 [2024-11-19T10:25:07.249Z] Total : 81.15 243.44 0.00 0.00 17876.63 316.59 116304.94 00:13:53.468 [2024-11-19 10:25:07.236286] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.468 [2024-11-19 10:25:07.236388] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:53.468 [2024-11-19 10:25:07.236515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:53.468 [2024-11-19 10:25:07.236583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:53.468 { 00:13:53.468 "results": [ 00:13:53.468 { 00:13:53.468 "job": "raid_bdev1", 00:13:53.468 "core_mask": "0x1", 00:13:53.468 "workload": "randrw", 00:13:53.468 "percentage": 50, 00:13:53.468 "status": "finished", 00:13:53.468 "queue_depth": 2, 00:13:53.468 "io_size": 3145728, 00:13:53.468 "runtime": 7.75143, 00:13:53.468 "iops": 81.14631751818696, 00:13:53.468 "mibps": 
243.4389525545609, 00:13:53.468 "io_failed": 0, 00:13:53.468 "io_timeout": 0, 00:13:53.468 "avg_latency_us": 17876.625327510916, 00:13:53.468 "min_latency_us": 316.5903930131004, 00:13:53.468 "max_latency_us": 116304.93624454149 00:13:53.468 } 00:13:53.468 ], 00:13:53.468 "core_count": 1 00:13:53.468 } 00:13:53.468 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.468 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:53.728 10:25:07 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.728 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:53.728 /dev/nbd0 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.988 1+0 records in 00:13:53.988 1+0 records out 00:13:53.988 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604557 s, 6.8 MB/s 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:53.988 
10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 
)) 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:53.988 /dev/nbd1 00:13:53.988 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.248 1+0 records in 00:13:54.248 1+0 records out 00:13:54.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231591 s, 17.7 MB/s 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.248 10:25:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.248 10:25:07 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@41 -- # break 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.508 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:54.768 /dev/nbd1 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- 
# (( i = 1 )) 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.768 1+0 records in 00:13:54.768 1+0 records out 00:13:54.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019661 s, 20.8 MB/s 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.768 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:54.769 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.769 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.028 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78473 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78473 ']' 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78473 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78473 00:13:55.289 killing process with pid 78473 00:13:55.289 Received shutdown signal, test time was about 9.481812 seconds 00:13:55.289 00:13:55.289 Latency(us) 00:13:55.289 [2024-11-19T10:25:09.070Z] Device Information : 
runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.289 [2024-11-19T10:25:09.070Z] =================================================================================================================== 00:13:55.289 [2024-11-19T10:25:09.070Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78473' 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78473 00:13:55.289 [2024-11-19 10:25:08.942519] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.289 10:25:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78473 00:13:55.548 [2024-11-19 10:25:09.327719] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.930 ************************************ 00:13:56.930 END TEST raid_rebuild_test_io 00:13:56.930 ************************************ 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:56.930 00:13:56.930 real 0m12.717s 00:13:56.930 user 0m16.057s 00:13:56.930 sys 0m1.692s 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.930 10:25:10 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:56.930 10:25:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:56.930 10:25:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.930 10:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.930 
************************************ 00:13:56.930 START TEST raid_rebuild_test_sb_io 00:13:56.930 ************************************ 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.930 10:25:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:56.930 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78878 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78878 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # 
'[' -z 78878 ']' 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.931 10:25:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.931 [2024-11-19 10:25:10.561024] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:13:56.931 [2024-11-19 10:25:10.561231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:56.931 Zero copy mechanism will not be used. 
00:13:56.931 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78878 ] 00:13:57.191 [2024-11-19 10:25:10.731410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.191 [2024-11-19 10:25:10.836091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.450 [2024-11-19 10:25:11.025891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.450 [2024-11-19 10:25:11.025999] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 BaseBdev1_malloc 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 [2024-11-19 10:25:11.405728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:57.710 [2024-11-19 10:25:11.405810] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.710 [2024-11-19 10:25:11.405833] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:57.710 [2024-11-19 10:25:11.405844] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.710 [2024-11-19 10:25:11.407878] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.710 [2024-11-19 10:25:11.407919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:57.710 BaseBdev1 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 BaseBdev2_malloc 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.710 [2024-11-19 10:25:11.457720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:57.710 [2024-11-19 10:25:11.457775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.710 [2024-11-19 10:25:11.457793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:57.710 [2024-11-19 10:25:11.457805] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.710 [2024-11-19 10:25:11.459866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.710 [2024-11-19 10:25:11.459960] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:57.710 BaseBdev2 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.710 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 BaseBdev3_malloc 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 [2024-11-19 10:25:11.544215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:57.970 [2024-11-19 10:25:11.544266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.970 [2024-11-19 10:25:11.544287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:57.970 [2024-11-19 10:25:11.544298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.970 [2024-11-19 
10:25:11.546249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.970 [2024-11-19 10:25:11.546349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:57.970 BaseBdev3 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 BaseBdev4_malloc 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 [2024-11-19 10:25:11.596772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:57.970 [2024-11-19 10:25:11.596821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.970 [2024-11-19 10:25:11.596852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:57.970 [2024-11-19 10:25:11.596862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.970 [2024-11-19 10:25:11.598841] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.970 [2024-11-19 10:25:11.598882] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:57.970 BaseBdev4 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 spare_malloc 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 spare_delay 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 [2024-11-19 10:25:11.662023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:57.970 [2024-11-19 10:25:11.662074] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.970 [2024-11-19 10:25:11.662093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:57.970 [2024-11-19 10:25:11.662102] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.970 [2024-11-19 10:25:11.664064] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.970 [2024-11-19 10:25:11.664152] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:57.970 spare 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.970 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.970 [2024-11-19 10:25:11.674045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.970 [2024-11-19 10:25:11.675780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.970 [2024-11-19 10:25:11.675847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.970 [2024-11-19 10:25:11.675896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.970 [2024-11-19 10:25:11.676072] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:57.970 [2024-11-19 10:25:11.676089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:57.970 [2024-11-19 10:25:11.676308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:57.971 [2024-11-19 10:25:11.676475] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:57.971 [2024-11-19 10:25:11.676485] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:57.971 
[2024-11-19 10:25:11.676627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:13:57.971 "name": "raid_bdev1", 00:13:57.971 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:13:57.971 "strip_size_kb": 0, 00:13:57.971 "state": "online", 00:13:57.971 "raid_level": "raid1", 00:13:57.971 "superblock": true, 00:13:57.971 "num_base_bdevs": 4, 00:13:57.971 "num_base_bdevs_discovered": 4, 00:13:57.971 "num_base_bdevs_operational": 4, 00:13:57.971 "base_bdevs_list": [ 00:13:57.971 { 00:13:57.971 "name": "BaseBdev1", 00:13:57.971 "uuid": "0118c82e-0bc5-5de3-834f-0c6404f95fdc", 00:13:57.971 "is_configured": true, 00:13:57.971 "data_offset": 2048, 00:13:57.971 "data_size": 63488 00:13:57.971 }, 00:13:57.971 { 00:13:57.971 "name": "BaseBdev2", 00:13:57.971 "uuid": "dffc286e-3327-584a-8846-3463fd04385e", 00:13:57.971 "is_configured": true, 00:13:57.971 "data_offset": 2048, 00:13:57.971 "data_size": 63488 00:13:57.971 }, 00:13:57.971 { 00:13:57.971 "name": "BaseBdev3", 00:13:57.971 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:13:57.971 "is_configured": true, 00:13:57.971 "data_offset": 2048, 00:13:57.971 "data_size": 63488 00:13:57.971 }, 00:13:57.971 { 00:13:57.971 "name": "BaseBdev4", 00:13:57.971 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:13:57.971 "is_configured": true, 00:13:57.971 "data_offset": 2048, 00:13:57.971 "data_size": 63488 00:13:57.971 } 00:13:57.971 ] 00:13:57.971 }' 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.971 10:25:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.540 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.540 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:58.540 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.540 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:58.540 [2024-11-19 10:25:12.097584] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.541 [2024-11-19 10:25:12.185115] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.541 10:25:12 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.541 "name": "raid_bdev1", 00:13:58.541 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:13:58.541 "strip_size_kb": 0, 00:13:58.541 "state": "online", 00:13:58.541 "raid_level": "raid1", 00:13:58.541 "superblock": true, 00:13:58.541 "num_base_bdevs": 4, 00:13:58.541 "num_base_bdevs_discovered": 3, 00:13:58.541 "num_base_bdevs_operational": 3, 
00:13:58.541 "base_bdevs_list": [ 00:13:58.541 { 00:13:58.541 "name": null, 00:13:58.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.541 "is_configured": false, 00:13:58.541 "data_offset": 0, 00:13:58.541 "data_size": 63488 00:13:58.541 }, 00:13:58.541 { 00:13:58.541 "name": "BaseBdev2", 00:13:58.541 "uuid": "dffc286e-3327-584a-8846-3463fd04385e", 00:13:58.541 "is_configured": true, 00:13:58.541 "data_offset": 2048, 00:13:58.541 "data_size": 63488 00:13:58.541 }, 00:13:58.541 { 00:13:58.541 "name": "BaseBdev3", 00:13:58.541 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:13:58.541 "is_configured": true, 00:13:58.541 "data_offset": 2048, 00:13:58.541 "data_size": 63488 00:13:58.541 }, 00:13:58.541 { 00:13:58.541 "name": "BaseBdev4", 00:13:58.541 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:13:58.541 "is_configured": true, 00:13:58.541 "data_offset": 2048, 00:13:58.541 "data_size": 63488 00:13:58.541 } 00:13:58.541 ] 00:13:58.541 }' 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.541 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.541 [2024-11-19 10:25:12.292512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:58.541 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.541 Zero copy mechanism will not be used. 00:13:58.541 Running I/O for 60 seconds... 
00:13:59.110 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.110 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.110 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.110 [2024-11-19 10:25:12.598054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.110 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.110 10:25:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:59.110 [2024-11-19 10:25:12.649818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:59.110 [2024-11-19 10:25:12.651710] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:59.370 [2024-11-19 10:25:12.907300] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:59.640 [2024-11-19 10:25:13.151433] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:59.640 [2024-11-19 10:25:13.152515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:59.640 225.00 IOPS, 675.00 MiB/s [2024-11-19T10:25:13.421Z] [2024-11-19 10:25:13.368073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.640 [2024-11-19 10:25:13.368340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:59.924 [2024-11-19 10:25:13.605730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:59.924 [2024-11-19 10:25:13.606958] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:59.924 "name": "raid_bdev1", 00:13:59.924 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:13:59.924 "strip_size_kb": 0, 00:13:59.924 "state": "online", 00:13:59.924 "raid_level": "raid1", 00:13:59.924 "superblock": true, 00:13:59.924 "num_base_bdevs": 4, 00:13:59.924 "num_base_bdevs_discovered": 4, 00:13:59.924 "num_base_bdevs_operational": 4, 00:13:59.924 "process": { 00:13:59.924 "type": "rebuild", 00:13:59.924 "target": "spare", 00:13:59.924 "progress": { 00:13:59.924 "blocks": 14336, 00:13:59.924 "percent": 22 00:13:59.924 } 00:13:59.924 }, 00:13:59.924 "base_bdevs_list": [ 00:13:59.924 { 00:13:59.924 "name": 
"spare", 00:13:59.924 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:13:59.924 "is_configured": true, 00:13:59.924 "data_offset": 2048, 00:13:59.924 "data_size": 63488 00:13:59.924 }, 00:13:59.924 { 00:13:59.924 "name": "BaseBdev2", 00:13:59.924 "uuid": "dffc286e-3327-584a-8846-3463fd04385e", 00:13:59.924 "is_configured": true, 00:13:59.924 "data_offset": 2048, 00:13:59.924 "data_size": 63488 00:13:59.924 }, 00:13:59.924 { 00:13:59.924 "name": "BaseBdev3", 00:13:59.924 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:13:59.924 "is_configured": true, 00:13:59.924 "data_offset": 2048, 00:13:59.924 "data_size": 63488 00:13:59.924 }, 00:13:59.924 { 00:13:59.924 "name": "BaseBdev4", 00:13:59.924 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:13:59.924 "is_configured": true, 00:13:59.924 "data_offset": 2048, 00:13:59.924 "data_size": 63488 00:13:59.924 } 00:13:59.924 ] 00:13:59.924 }' 00:13:59.924 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.183 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.183 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.183 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.183 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:00.183 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.183 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.183 [2024-11-19 10:25:13.803751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.183 [2024-11-19 10:25:13.816452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:00.183 [2024-11-19 
10:25:13.918755] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.183 [2024-11-19 10:25:13.922273] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.183 [2024-11-19 10:25:13.922347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.184 [2024-11-19 10:25:13.922376] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.184 [2024-11-19 10:25:13.945474] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.184 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.443 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:00.443 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.443 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.443 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.443 10:25:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.443 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.443 "name": "raid_bdev1", 00:14:00.443 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:00.443 "strip_size_kb": 0, 00:14:00.443 "state": "online", 00:14:00.443 "raid_level": "raid1", 00:14:00.443 "superblock": true, 00:14:00.443 "num_base_bdevs": 4, 00:14:00.443 "num_base_bdevs_discovered": 3, 00:14:00.443 "num_base_bdevs_operational": 3, 00:14:00.443 "base_bdevs_list": [ 00:14:00.443 { 00:14:00.443 "name": null, 00:14:00.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.443 "is_configured": false, 00:14:00.443 "data_offset": 0, 00:14:00.443 "data_size": 63488 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "name": "BaseBdev2", 00:14:00.443 "uuid": "dffc286e-3327-584a-8846-3463fd04385e", 00:14:00.443 "is_configured": true, 00:14:00.443 "data_offset": 2048, 00:14:00.443 "data_size": 63488 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "name": "BaseBdev3", 00:14:00.443 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:00.443 "is_configured": true, 00:14:00.443 "data_offset": 2048, 00:14:00.443 "data_size": 63488 00:14:00.443 }, 00:14:00.443 { 00:14:00.443 "name": "BaseBdev4", 00:14:00.443 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:00.443 "is_configured": true, 00:14:00.443 "data_offset": 2048, 00:14:00.443 "data_size": 63488 00:14:00.443 } 00:14:00.443 ] 00:14:00.443 }' 00:14:00.443 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.443 10:25:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.703 187.50 IOPS, 562.50 MiB/s [2024-11-19T10:25:14.484Z] 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.703 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.703 "name": "raid_bdev1", 00:14:00.703 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:00.703 "strip_size_kb": 0, 00:14:00.703 "state": "online", 00:14:00.703 "raid_level": "raid1", 00:14:00.703 "superblock": true, 00:14:00.703 "num_base_bdevs": 4, 00:14:00.703 "num_base_bdevs_discovered": 3, 00:14:00.703 "num_base_bdevs_operational": 3, 00:14:00.703 "base_bdevs_list": [ 00:14:00.703 { 00:14:00.703 "name": null, 00:14:00.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.703 "is_configured": false, 00:14:00.703 "data_offset": 0, 00:14:00.703 "data_size": 63488 00:14:00.703 }, 00:14:00.703 { 
00:14:00.703 "name": "BaseBdev2", 00:14:00.703 "uuid": "dffc286e-3327-584a-8846-3463fd04385e", 00:14:00.703 "is_configured": true, 00:14:00.703 "data_offset": 2048, 00:14:00.703 "data_size": 63488 00:14:00.703 }, 00:14:00.703 { 00:14:00.703 "name": "BaseBdev3", 00:14:00.703 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:00.703 "is_configured": true, 00:14:00.703 "data_offset": 2048, 00:14:00.703 "data_size": 63488 00:14:00.703 }, 00:14:00.703 { 00:14:00.703 "name": "BaseBdev4", 00:14:00.703 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:00.703 "is_configured": true, 00:14:00.703 "data_offset": 2048, 00:14:00.703 "data_size": 63488 00:14:00.704 } 00:14:00.704 ] 00:14:00.704 }' 00:14:00.704 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.704 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:00.704 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.963 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:00.963 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.963 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.964 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.964 [2024-11-19 10:25:14.521568] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.964 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.964 10:25:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:00.964 [2024-11-19 10:25:14.574012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:00.964 [2024-11-19 10:25:14.575998] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.964 [2024-11-19 10:25:14.683792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.964 [2024-11-19 10:25:14.684276] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:01.223 [2024-11-19 10:25:14.817799] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.223 [2024-11-19 10:25:14.818516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:01.483 [2024-11-19 10:25:15.146940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:01.743 [2024-11-19 10:25:15.268250] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:02.003 180.33 IOPS, 541.00 MiB/s [2024-11-19T10:25:15.784Z] 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.003 [2024-11-19 10:25:15.599278] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:02.003 [2024-11-19 10:25:15.599808] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.003 "name": "raid_bdev1", 00:14:02.003 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:02.003 "strip_size_kb": 0, 00:14:02.003 "state": "online", 00:14:02.003 "raid_level": "raid1", 00:14:02.003 "superblock": true, 00:14:02.003 "num_base_bdevs": 4, 00:14:02.003 "num_base_bdevs_discovered": 4, 00:14:02.003 "num_base_bdevs_operational": 4, 00:14:02.003 "process": { 00:14:02.003 "type": "rebuild", 00:14:02.003 "target": "spare", 00:14:02.003 "progress": { 00:14:02.003 "blocks": 12288, 00:14:02.003 "percent": 19 00:14:02.003 } 00:14:02.003 }, 00:14:02.003 "base_bdevs_list": [ 00:14:02.003 { 00:14:02.003 "name": "spare", 00:14:02.003 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:02.003 "is_configured": true, 00:14:02.003 "data_offset": 2048, 00:14:02.003 "data_size": 63488 00:14:02.003 }, 00:14:02.003 { 00:14:02.003 "name": "BaseBdev2", 00:14:02.003 "uuid": "dffc286e-3327-584a-8846-3463fd04385e", 00:14:02.003 "is_configured": true, 00:14:02.003 "data_offset": 2048, 00:14:02.003 "data_size": 63488 00:14:02.003 }, 00:14:02.003 { 00:14:02.003 "name": "BaseBdev3", 00:14:02.003 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:02.003 "is_configured": true, 00:14:02.003 "data_offset": 2048, 00:14:02.003 "data_size": 63488 00:14:02.003 }, 00:14:02.003 { 00:14:02.003 "name": 
"BaseBdev4", 00:14:02.003 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:02.003 "is_configured": true, 00:14:02.003 "data_offset": 2048, 00:14:02.003 "data_size": 63488 00:14:02.003 } 00:14:02.003 ] 00:14:02.003 }' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:02.003 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.003 10:25:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.003 [2024-11-19 10:25:15.682877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.263 [2024-11-19 10:25:15.810961] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:02.263 [2024-11-19 10:25:15.811829] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:02.263 [2024-11-19 10:25:16.014226] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:02.263 [2024-11-19 10:25:16.014353] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.263 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.523 "name": 
"raid_bdev1", 00:14:02.523 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:02.523 "strip_size_kb": 0, 00:14:02.523 "state": "online", 00:14:02.523 "raid_level": "raid1", 00:14:02.523 "superblock": true, 00:14:02.523 "num_base_bdevs": 4, 00:14:02.523 "num_base_bdevs_discovered": 3, 00:14:02.523 "num_base_bdevs_operational": 3, 00:14:02.523 "process": { 00:14:02.523 "type": "rebuild", 00:14:02.523 "target": "spare", 00:14:02.523 "progress": { 00:14:02.523 "blocks": 16384, 00:14:02.523 "percent": 25 00:14:02.523 } 00:14:02.523 }, 00:14:02.523 "base_bdevs_list": [ 00:14:02.523 { 00:14:02.523 "name": "spare", 00:14:02.523 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:02.523 "is_configured": true, 00:14:02.523 "data_offset": 2048, 00:14:02.523 "data_size": 63488 00:14:02.523 }, 00:14:02.523 { 00:14:02.523 "name": null, 00:14:02.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.523 "is_configured": false, 00:14:02.523 "data_offset": 0, 00:14:02.523 "data_size": 63488 00:14:02.523 }, 00:14:02.523 { 00:14:02.523 "name": "BaseBdev3", 00:14:02.523 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:02.523 "is_configured": true, 00:14:02.523 "data_offset": 2048, 00:14:02.523 "data_size": 63488 00:14:02.523 }, 00:14:02.523 { 00:14:02.523 "name": "BaseBdev4", 00:14:02.523 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:02.523 "is_configured": true, 00:14:02.523 "data_offset": 2048, 00:14:02.523 "data_size": 63488 00:14:02.523 } 00:14:02.523 ] 00:14:02.523 }' 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.523 10:25:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.523 "name": "raid_bdev1", 00:14:02.523 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:02.523 "strip_size_kb": 0, 00:14:02.523 "state": "online", 00:14:02.523 "raid_level": "raid1", 00:14:02.523 "superblock": true, 00:14:02.523 "num_base_bdevs": 4, 00:14:02.523 "num_base_bdevs_discovered": 3, 00:14:02.523 "num_base_bdevs_operational": 3, 00:14:02.523 "process": { 00:14:02.523 "type": "rebuild", 00:14:02.523 "target": "spare", 00:14:02.523 "progress": { 00:14:02.523 "blocks": 18432, 00:14:02.523 "percent": 29 00:14:02.523 } 00:14:02.523 }, 
00:14:02.523 "base_bdevs_list": [ 00:14:02.523 { 00:14:02.523 "name": "spare", 00:14:02.523 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:02.523 "is_configured": true, 00:14:02.523 "data_offset": 2048, 00:14:02.523 "data_size": 63488 00:14:02.523 }, 00:14:02.523 { 00:14:02.523 "name": null, 00:14:02.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.523 "is_configured": false, 00:14:02.523 "data_offset": 0, 00:14:02.523 "data_size": 63488 00:14:02.523 }, 00:14:02.523 { 00:14:02.523 "name": "BaseBdev3", 00:14:02.523 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:02.523 "is_configured": true, 00:14:02.523 "data_offset": 2048, 00:14:02.523 "data_size": 63488 00:14:02.523 }, 00:14:02.523 { 00:14:02.523 "name": "BaseBdev4", 00:14:02.523 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:02.523 "is_configured": true, 00:14:02.523 "data_offset": 2048, 00:14:02.523 "data_size": 63488 00:14:02.523 } 00:14:02.523 ] 00:14:02.523 }' 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.523 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.523 [2024-11-19 10:25:16.248460] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:02.524 [2024-11-19 10:25:16.249530] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:02.524 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.524 10:25:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:02.783 148.50 IOPS, 445.50 MiB/s [2024-11-19T10:25:16.564Z] [2024-11-19 10:25:16.464369] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:02.783 [2024-11-19 10:25:16.464924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:03.353 [2024-11-19 10:25:16.897861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:03.614 [2024-11-19 10:25:17.221095] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:03.614 127.00 IOPS, 381.00 MiB/s [2024-11-19T10:25:17.395Z] 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.614 "name": 
"raid_bdev1", 00:14:03.614 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:03.614 "strip_size_kb": 0, 00:14:03.614 "state": "online", 00:14:03.614 "raid_level": "raid1", 00:14:03.614 "superblock": true, 00:14:03.614 "num_base_bdevs": 4, 00:14:03.614 "num_base_bdevs_discovered": 3, 00:14:03.614 "num_base_bdevs_operational": 3, 00:14:03.614 "process": { 00:14:03.614 "type": "rebuild", 00:14:03.614 "target": "spare", 00:14:03.614 "progress": { 00:14:03.614 "blocks": 32768, 00:14:03.614 "percent": 51 00:14:03.614 } 00:14:03.614 }, 00:14:03.614 "base_bdevs_list": [ 00:14:03.614 { 00:14:03.614 "name": "spare", 00:14:03.614 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:03.614 "is_configured": true, 00:14:03.614 "data_offset": 2048, 00:14:03.614 "data_size": 63488 00:14:03.614 }, 00:14:03.614 { 00:14:03.614 "name": null, 00:14:03.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.614 "is_configured": false, 00:14:03.614 "data_offset": 0, 00:14:03.614 "data_size": 63488 00:14:03.614 }, 00:14:03.614 { 00:14:03.614 "name": "BaseBdev3", 00:14:03.614 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:03.614 "is_configured": true, 00:14:03.614 "data_offset": 2048, 00:14:03.614 "data_size": 63488 00:14:03.614 }, 00:14:03.614 { 00:14:03.614 "name": "BaseBdev4", 00:14:03.614 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:03.614 "is_configured": true, 00:14:03.614 "data_offset": 2048, 00:14:03.614 "data_size": 63488 00:14:03.614 } 00:14:03.614 ] 00:14:03.614 }' 00:14:03.614 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.874 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.874 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.874 10:25:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.874 10:25:17 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.874 [2024-11-19 10:25:17.551230] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:04.812 112.83 IOPS, 338.50 MiB/s [2024-11-19T10:25:18.593Z] [2024-11-19 10:25:18.443020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.812 "name": "raid_bdev1", 00:14:04.812 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:04.812 "strip_size_kb": 0, 00:14:04.812 "state": "online", 00:14:04.812 "raid_level": 
"raid1", 00:14:04.812 "superblock": true, 00:14:04.812 "num_base_bdevs": 4, 00:14:04.812 "num_base_bdevs_discovered": 3, 00:14:04.812 "num_base_bdevs_operational": 3, 00:14:04.812 "process": { 00:14:04.812 "type": "rebuild", 00:14:04.812 "target": "spare", 00:14:04.812 "progress": { 00:14:04.812 "blocks": 53248, 00:14:04.812 "percent": 83 00:14:04.812 } 00:14:04.812 }, 00:14:04.812 "base_bdevs_list": [ 00:14:04.812 { 00:14:04.812 "name": "spare", 00:14:04.812 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:04.812 "is_configured": true, 00:14:04.812 "data_offset": 2048, 00:14:04.812 "data_size": 63488 00:14:04.812 }, 00:14:04.812 { 00:14:04.812 "name": null, 00:14:04.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.812 "is_configured": false, 00:14:04.812 "data_offset": 0, 00:14:04.812 "data_size": 63488 00:14:04.812 }, 00:14:04.812 { 00:14:04.812 "name": "BaseBdev3", 00:14:04.812 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:04.812 "is_configured": true, 00:14:04.812 "data_offset": 2048, 00:14:04.812 "data_size": 63488 00:14:04.812 }, 00:14:04.812 { 00:14:04.812 "name": "BaseBdev4", 00:14:04.812 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:04.812 "is_configured": true, 00:14:04.812 "data_offset": 2048, 00:14:04.812 "data_size": 63488 00:14:04.812 } 00:14:04.812 ] 00:14:04.812 }' 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.812 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.070 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.070 10:25:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.328 [2024-11-19 10:25:19.080952] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: 
process completed on raid_bdev1 00:14:05.628 [2024-11-19 10:25:19.180779] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.628 [2024-11-19 10:25:19.188709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.901 101.86 IOPS, 305.57 MiB/s [2024-11-19T10:25:19.682Z] 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.901 "name": "raid_bdev1", 00:14:05.901 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:05.901 "strip_size_kb": 0, 00:14:05.901 "state": "online", 00:14:05.901 "raid_level": "raid1", 00:14:05.901 "superblock": true, 00:14:05.901 "num_base_bdevs": 4, 00:14:05.901 "num_base_bdevs_discovered": 
3, 00:14:05.901 "num_base_bdevs_operational": 3, 00:14:05.901 "base_bdevs_list": [ 00:14:05.901 { 00:14:05.901 "name": "spare", 00:14:05.901 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:05.901 "is_configured": true, 00:14:05.901 "data_offset": 2048, 00:14:05.901 "data_size": 63488 00:14:05.901 }, 00:14:05.901 { 00:14:05.901 "name": null, 00:14:05.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.901 "is_configured": false, 00:14:05.901 "data_offset": 0, 00:14:05.901 "data_size": 63488 00:14:05.901 }, 00:14:05.901 { 00:14:05.901 "name": "BaseBdev3", 00:14:05.901 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:05.901 "is_configured": true, 00:14:05.901 "data_offset": 2048, 00:14:05.901 "data_size": 63488 00:14:05.901 }, 00:14:05.901 { 00:14:05.901 "name": "BaseBdev4", 00:14:05.901 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:05.901 "is_configured": true, 00:14:05.901 "data_offset": 2048, 00:14:05.901 "data_size": 63488 00:14:05.901 } 00:14:05.901 ] 00:14:05.901 }' 00:14:05.901 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.161 "name": "raid_bdev1", 00:14:06.161 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:06.161 "strip_size_kb": 0, 00:14:06.161 "state": "online", 00:14:06.161 "raid_level": "raid1", 00:14:06.161 "superblock": true, 00:14:06.161 "num_base_bdevs": 4, 00:14:06.161 "num_base_bdevs_discovered": 3, 00:14:06.161 "num_base_bdevs_operational": 3, 00:14:06.161 "base_bdevs_list": [ 00:14:06.161 { 00:14:06.161 "name": "spare", 00:14:06.161 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:06.161 "is_configured": true, 00:14:06.161 "data_offset": 2048, 00:14:06.161 "data_size": 63488 00:14:06.161 }, 00:14:06.161 { 00:14:06.161 "name": null, 00:14:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.161 "is_configured": false, 00:14:06.161 "data_offset": 0, 00:14:06.161 "data_size": 63488 00:14:06.161 }, 00:14:06.161 { 00:14:06.161 "name": "BaseBdev3", 00:14:06.161 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:06.161 "is_configured": true, 00:14:06.161 "data_offset": 2048, 00:14:06.161 "data_size": 63488 00:14:06.161 }, 00:14:06.161 { 00:14:06.161 "name": "BaseBdev4", 00:14:06.161 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 
00:14:06.161 "is_configured": true, 00:14:06.161 "data_offset": 2048, 00:14:06.161 "data_size": 63488 00:14:06.161 } 00:14:06.161 ] 00:14:06.161 }' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.161 10:25:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:06.161 "name": "raid_bdev1", 00:14:06.161 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:06.161 "strip_size_kb": 0, 00:14:06.161 "state": "online", 00:14:06.161 "raid_level": "raid1", 00:14:06.161 "superblock": true, 00:14:06.161 "num_base_bdevs": 4, 00:14:06.161 "num_base_bdevs_discovered": 3, 00:14:06.161 "num_base_bdevs_operational": 3, 00:14:06.161 "base_bdevs_list": [ 00:14:06.161 { 00:14:06.161 "name": "spare", 00:14:06.161 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:06.161 "is_configured": true, 00:14:06.161 "data_offset": 2048, 00:14:06.161 "data_size": 63488 00:14:06.161 }, 00:14:06.161 { 00:14:06.161 "name": null, 00:14:06.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.161 "is_configured": false, 00:14:06.161 "data_offset": 0, 00:14:06.161 "data_size": 63488 00:14:06.161 }, 00:14:06.161 { 00:14:06.161 "name": "BaseBdev3", 00:14:06.161 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:06.161 "is_configured": true, 00:14:06.161 "data_offset": 2048, 00:14:06.161 "data_size": 63488 00:14:06.161 }, 00:14:06.161 { 00:14:06.161 "name": "BaseBdev4", 00:14:06.161 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:06.161 "is_configured": true, 00:14:06.161 "data_offset": 2048, 00:14:06.161 "data_size": 63488 00:14:06.161 } 00:14:06.161 ] 00:14:06.161 }' 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:06.161 10:25:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.730 93.75 IOPS, 281.25 MiB/s 
[2024-11-19T10:25:20.512Z] 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.731 [2024-11-19 10:25:20.308756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.731 [2024-11-19 10:25:20.308790] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.731 00:14:06.731 Latency(us) 00:14:06.731 [2024-11-19T10:25:20.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.731 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:06.731 raid_bdev1 : 8.11 92.78 278.34 0.00 0.00 14541.94 304.07 115389.15 00:14:06.731 [2024-11-19T10:25:20.512Z] =================================================================================================================== 00:14:06.731 [2024-11-19T10:25:20.512Z] Total : 92.78 278.34 0.00 0.00 14541.94 304.07 115389.15 00:14:06.731 [2024-11-19 10:25:20.403663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.731 [2024-11-19 10:25:20.403758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.731 [2024-11-19 10:25:20.403864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.731 [2024-11-19 10:25:20.403959] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.731 { 00:14:06.731 "results": [ 00:14:06.731 { 00:14:06.731 "job": "raid_bdev1", 00:14:06.731 "core_mask": "0x1", 00:14:06.731 "workload": "randrw", 00:14:06.731 "percentage": 50, 00:14:06.731 "status": "finished", 00:14:06.731 "queue_depth": 2, 00:14:06.731 "io_size": 
3145728, 00:14:06.731 "runtime": 8.105139, 00:14:06.731 "iops": 92.7806420099643, 00:14:06.731 "mibps": 278.3419260298929, 00:14:06.731 "io_failed": 0, 00:14:06.731 "io_timeout": 0, 00:14:06.731 "avg_latency_us": 14541.938121341635, 00:14:06.731 "min_latency_us": 304.0698689956332, 00:14:06.731 "max_latency_us": 115389.14934497817 00:14:06.731 } 00:14:06.731 ], 00:14:06.731 "core_count": 1 00:14:06.731 } 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.731 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:06.991 /dev/nbd0 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.991 1+0 records in 00:14:06.991 1+0 records out 00:14:06.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373185 s, 11.0 MB/s 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.991 
10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.991 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:07.250 /dev/nbd1 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.250 1+0 records in 00:14:07.250 1+0 records out 00:14:07.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000533344 s, 7.7 MB/s 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.250 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:07.251 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.251 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:07.251 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:07.251 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.251 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.251 10:25:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.510 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:07.769 /dev/nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
basename /dev/nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:07.769 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.029 1+0 records in 00:14:08.029 1+0 records out 00:14:08.029 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538318 s, 7.6 MB/s 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.029 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.288 10:25:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.288 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.548 [2024-11-19 10:25:22.070507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.548 [2024-11-19 10:25:22.070606] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.548 [2024-11-19 10:25:22.070671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:08.548 [2024-11-19 10:25:22.070711] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.548 [2024-11-19 10:25:22.072821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.548 [2024-11-19 10:25:22.072908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.548 [2024-11-19 10:25:22.073046] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.548 [2024-11-19 10:25:22.073135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.548 [2024-11-19 10:25:22.073295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:08.548 [2024-11-19 10:25:22.073453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:08.548 spare 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:08.548 10:25:22 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.548 [2024-11-19 10:25:22.173387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:08.548 [2024-11-19 10:25:22.173415] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:08.548 [2024-11-19 10:25:22.173656] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:08.548 [2024-11-19 10:25:22.173821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:08.548 [2024-11-19 10:25:22.173831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:08.548 [2024-11-19 10:25:22.174012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.548 "name": "raid_bdev1", 00:14:08.548 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:08.548 "strip_size_kb": 0, 00:14:08.548 "state": "online", 00:14:08.548 "raid_level": "raid1", 00:14:08.548 "superblock": true, 00:14:08.548 "num_base_bdevs": 4, 00:14:08.548 "num_base_bdevs_discovered": 3, 00:14:08.548 "num_base_bdevs_operational": 3, 00:14:08.548 "base_bdevs_list": [ 00:14:08.548 { 00:14:08.548 "name": "spare", 00:14:08.548 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:08.548 "is_configured": true, 00:14:08.548 "data_offset": 2048, 00:14:08.548 "data_size": 63488 00:14:08.548 }, 00:14:08.548 { 00:14:08.548 "name": null, 00:14:08.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.548 "is_configured": false, 00:14:08.548 "data_offset": 2048, 00:14:08.548 "data_size": 63488 00:14:08.548 }, 00:14:08.548 { 00:14:08.548 "name": "BaseBdev3", 00:14:08.548 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:08.548 "is_configured": true, 00:14:08.548 "data_offset": 2048, 00:14:08.548 "data_size": 63488 00:14:08.548 }, 00:14:08.548 { 00:14:08.548 "name": "BaseBdev4", 
00:14:08.548 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:08.548 "is_configured": true, 00:14:08.548 "data_offset": 2048, 00:14:08.548 "data_size": 63488 00:14:08.548 } 00:14:08.548 ] 00:14:08.548 }' 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.548 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.117 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.117 "name": "raid_bdev1", 00:14:09.117 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:09.117 "strip_size_kb": 0, 00:14:09.117 "state": "online", 00:14:09.117 "raid_level": "raid1", 00:14:09.117 "superblock": true, 00:14:09.117 "num_base_bdevs": 4, 00:14:09.117 "num_base_bdevs_discovered": 3, 00:14:09.117 
"num_base_bdevs_operational": 3, 00:14:09.117 "base_bdevs_list": [ 00:14:09.117 { 00:14:09.117 "name": "spare", 00:14:09.117 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:09.117 "is_configured": true, 00:14:09.117 "data_offset": 2048, 00:14:09.117 "data_size": 63488 00:14:09.117 }, 00:14:09.117 { 00:14:09.117 "name": null, 00:14:09.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.117 "is_configured": false, 00:14:09.117 "data_offset": 2048, 00:14:09.117 "data_size": 63488 00:14:09.117 }, 00:14:09.117 { 00:14:09.117 "name": "BaseBdev3", 00:14:09.117 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:09.117 "is_configured": true, 00:14:09.117 "data_offset": 2048, 00:14:09.117 "data_size": 63488 00:14:09.117 }, 00:14:09.117 { 00:14:09.117 "name": "BaseBdev4", 00:14:09.117 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:09.117 "is_configured": true, 00:14:09.118 "data_offset": 2048, 00:14:09.118 "data_size": 63488 00:14:09.118 } 00:14:09.118 ] 00:14:09.118 }' 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.118 [2024-11-19 10:25:22.845317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.118 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.378 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.378 "name": "raid_bdev1", 00:14:09.378 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:09.378 "strip_size_kb": 0, 00:14:09.378 "state": "online", 00:14:09.378 "raid_level": "raid1", 00:14:09.378 "superblock": true, 00:14:09.378 "num_base_bdevs": 4, 00:14:09.378 "num_base_bdevs_discovered": 2, 00:14:09.378 "num_base_bdevs_operational": 2, 00:14:09.378 "base_bdevs_list": [ 00:14:09.378 { 00:14:09.378 "name": null, 00:14:09.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.378 "is_configured": false, 00:14:09.378 "data_offset": 0, 00:14:09.378 "data_size": 63488 00:14:09.378 }, 00:14:09.378 { 00:14:09.378 "name": null, 00:14:09.378 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.378 "is_configured": false, 00:14:09.378 "data_offset": 2048, 00:14:09.378 "data_size": 63488 00:14:09.378 }, 00:14:09.378 { 00:14:09.378 "name": "BaseBdev3", 00:14:09.378 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:09.378 "is_configured": true, 00:14:09.378 "data_offset": 2048, 00:14:09.378 "data_size": 63488 00:14:09.378 }, 00:14:09.378 { 00:14:09.378 "name": "BaseBdev4", 00:14:09.378 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:09.378 "is_configured": true, 00:14:09.378 "data_offset": 2048, 00:14:09.378 "data_size": 63488 00:14:09.378 } 00:14:09.378 ] 00:14:09.378 }' 00:14:09.378 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.378 10:25:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:14:09.637 10:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.637 10:25:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.637 10:25:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.637 [2024-11-19 10:25:23.320593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.637 [2024-11-19 10:25:23.320758] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:09.637 [2024-11-19 10:25:23.320775] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:09.637 [2024-11-19 10:25:23.320811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.637 [2024-11-19 10:25:23.334565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:09.637 10:25:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.637 10:25:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:09.637 [2024-11-19 10:25:23.336394] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:10.574 10:25:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.574 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.834 "name": "raid_bdev1", 00:14:10.834 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:10.834 "strip_size_kb": 0, 00:14:10.834 "state": "online", 00:14:10.834 "raid_level": "raid1", 00:14:10.834 "superblock": true, 00:14:10.834 "num_base_bdevs": 4, 00:14:10.834 "num_base_bdevs_discovered": 3, 00:14:10.834 "num_base_bdevs_operational": 3, 00:14:10.834 "process": { 00:14:10.834 "type": "rebuild", 00:14:10.834 "target": "spare", 00:14:10.834 "progress": { 00:14:10.834 "blocks": 20480, 00:14:10.834 "percent": 32 00:14:10.834 } 00:14:10.834 }, 00:14:10.834 "base_bdevs_list": [ 00:14:10.834 { 00:14:10.834 "name": "spare", 00:14:10.834 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:10.834 "is_configured": true, 00:14:10.834 "data_offset": 2048, 00:14:10.834 "data_size": 63488 00:14:10.834 }, 00:14:10.834 { 00:14:10.834 "name": null, 00:14:10.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.834 "is_configured": false, 00:14:10.834 "data_offset": 2048, 00:14:10.834 "data_size": 63488 00:14:10.834 }, 00:14:10.834 { 00:14:10.834 "name": "BaseBdev3", 00:14:10.834 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:10.834 "is_configured": true, 00:14:10.834 "data_offset": 2048, 00:14:10.834 "data_size": 63488 00:14:10.834 }, 00:14:10.834 { 00:14:10.834 "name": "BaseBdev4", 00:14:10.834 "uuid": 
"ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:10.834 "is_configured": true, 00:14:10.834 "data_offset": 2048, 00:14:10.834 "data_size": 63488 00:14:10.834 } 00:14:10.834 ] 00:14:10.834 }' 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.834 [2024-11-19 10:25:24.504134] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.834 [2024-11-19 10:25:24.541025] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:10.834 [2024-11-19 10:25:24.541102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.834 [2024-11-19 10:25:24.541118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:10.834 [2024-11-19 10:25:24.541127] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.834 10:25:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.834 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.093 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.093 "name": "raid_bdev1", 00:14:11.093 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:11.093 "strip_size_kb": 0, 00:14:11.093 "state": "online", 00:14:11.093 "raid_level": "raid1", 00:14:11.093 "superblock": true, 00:14:11.093 "num_base_bdevs": 4, 00:14:11.093 "num_base_bdevs_discovered": 2, 00:14:11.093 "num_base_bdevs_operational": 2, 00:14:11.093 "base_bdevs_list": [ 00:14:11.093 { 00:14:11.093 "name": null, 00:14:11.093 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:11.093 "is_configured": false, 00:14:11.093 "data_offset": 0, 00:14:11.093 "data_size": 63488 00:14:11.093 }, 00:14:11.093 { 00:14:11.093 "name": null, 00:14:11.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.093 "is_configured": false, 00:14:11.093 "data_offset": 2048, 00:14:11.093 "data_size": 63488 00:14:11.093 }, 00:14:11.093 { 00:14:11.093 "name": "BaseBdev3", 00:14:11.093 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:11.093 "is_configured": true, 00:14:11.093 "data_offset": 2048, 00:14:11.093 "data_size": 63488 00:14:11.093 }, 00:14:11.093 { 00:14:11.093 "name": "BaseBdev4", 00:14:11.093 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:11.093 "is_configured": true, 00:14:11.093 "data_offset": 2048, 00:14:11.093 "data_size": 63488 00:14:11.093 } 00:14:11.093 ] 00:14:11.093 }' 00:14:11.093 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.093 10:25:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.353 10:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.353 10:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.353 10:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.353 [2024-11-19 10:25:25.024473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.353 [2024-11-19 10:25:25.024578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.353 [2024-11-19 10:25:25.024605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:11.353 [2024-11-19 10:25:25.024616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.353 [2024-11-19 10:25:25.025084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:14:11.353 [2024-11-19 10:25:25.025106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.353 [2024-11-19 10:25:25.025193] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:11.353 [2024-11-19 10:25:25.025208] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:11.353 [2024-11-19 10:25:25.025218] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:11.353 [2024-11-19 10:25:25.025242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.353 [2024-11-19 10:25:25.039571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:11.353 spare 00:14:11.353 10:25:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.353 10:25:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:11.353 [2024-11-19 10:25:25.041402] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.291 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.550 "name": "raid_bdev1", 00:14:12.550 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:12.550 "strip_size_kb": 0, 00:14:12.550 "state": "online", 00:14:12.550 "raid_level": "raid1", 00:14:12.550 "superblock": true, 00:14:12.550 "num_base_bdevs": 4, 00:14:12.550 "num_base_bdevs_discovered": 3, 00:14:12.550 "num_base_bdevs_operational": 3, 00:14:12.550 "process": { 00:14:12.550 "type": "rebuild", 00:14:12.550 "target": "spare", 00:14:12.550 "progress": { 00:14:12.550 "blocks": 20480, 00:14:12.550 "percent": 32 00:14:12.550 } 00:14:12.550 }, 00:14:12.550 "base_bdevs_list": [ 00:14:12.550 { 00:14:12.550 "name": "spare", 00:14:12.550 "uuid": "a946d280-5b60-582b-9013-6b4a93fbd04c", 00:14:12.550 "is_configured": true, 00:14:12.550 "data_offset": 2048, 00:14:12.550 "data_size": 63488 00:14:12.550 }, 00:14:12.550 { 00:14:12.550 "name": null, 00:14:12.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.550 "is_configured": false, 00:14:12.550 "data_offset": 2048, 00:14:12.550 "data_size": 63488 00:14:12.550 }, 00:14:12.550 { 00:14:12.550 "name": "BaseBdev3", 00:14:12.550 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:12.550 "is_configured": true, 00:14:12.550 "data_offset": 2048, 00:14:12.550 "data_size": 63488 00:14:12.550 }, 00:14:12.550 { 00:14:12.550 "name": "BaseBdev4", 00:14:12.550 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:12.550 "is_configured": true, 00:14:12.550 "data_offset": 2048, 00:14:12.550 "data_size": 63488 00:14:12.550 } 00:14:12.550 ] 00:14:12.550 }' 00:14:12.550 10:25:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.550 [2024-11-19 10:25:26.189146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.550 [2024-11-19 10:25:26.245984] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:12.550 [2024-11-19 10:25:26.246066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.550 [2024-11-19 10:25:26.246086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.550 [2024-11-19 10:25:26.246093] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.550 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.551 "name": "raid_bdev1", 00:14:12.551 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:12.551 "strip_size_kb": 0, 00:14:12.551 "state": "online", 00:14:12.551 "raid_level": "raid1", 00:14:12.551 "superblock": true, 00:14:12.551 "num_base_bdevs": 4, 00:14:12.551 "num_base_bdevs_discovered": 2, 00:14:12.551 "num_base_bdevs_operational": 2, 00:14:12.551 "base_bdevs_list": [ 00:14:12.551 { 00:14:12.551 "name": null, 00:14:12.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.551 "is_configured": false, 00:14:12.551 "data_offset": 0, 00:14:12.551 "data_size": 63488 00:14:12.551 }, 00:14:12.551 { 00:14:12.551 "name": null, 00:14:12.551 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:12.551 "is_configured": false, 00:14:12.551 "data_offset": 2048, 00:14:12.551 "data_size": 63488 00:14:12.551 }, 00:14:12.551 { 00:14:12.551 "name": "BaseBdev3", 00:14:12.551 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:12.551 "is_configured": true, 00:14:12.551 "data_offset": 2048, 00:14:12.551 "data_size": 63488 00:14:12.551 }, 00:14:12.551 { 00:14:12.551 "name": "BaseBdev4", 00:14:12.551 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:12.551 "is_configured": true, 00:14:12.551 "data_offset": 2048, 00:14:12.551 "data_size": 63488 00:14:12.551 } 00:14:12.551 ] 00:14:12.551 }' 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.551 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.121 
10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.121 "name": "raid_bdev1", 00:14:13.121 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:13.121 "strip_size_kb": 0, 00:14:13.121 "state": "online", 00:14:13.121 "raid_level": "raid1", 00:14:13.121 "superblock": true, 00:14:13.121 "num_base_bdevs": 4, 00:14:13.121 "num_base_bdevs_discovered": 2, 00:14:13.121 "num_base_bdevs_operational": 2, 00:14:13.121 "base_bdevs_list": [ 00:14:13.121 { 00:14:13.121 "name": null, 00:14:13.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.121 "is_configured": false, 00:14:13.121 "data_offset": 0, 00:14:13.121 "data_size": 63488 00:14:13.121 }, 00:14:13.121 { 00:14:13.121 "name": null, 00:14:13.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.121 "is_configured": false, 00:14:13.121 "data_offset": 2048, 00:14:13.121 "data_size": 63488 00:14:13.121 }, 00:14:13.121 { 00:14:13.121 "name": "BaseBdev3", 00:14:13.121 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:13.121 "is_configured": true, 00:14:13.121 "data_offset": 2048, 00:14:13.121 "data_size": 63488 00:14:13.121 }, 00:14:13.121 { 00:14:13.121 "name": "BaseBdev4", 00:14:13.121 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:13.121 "is_configured": true, 00:14:13.121 "data_offset": 2048, 00:14:13.121 "data_size": 63488 00:14:13.121 } 00:14:13.121 ] 00:14:13.121 }' 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 
00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.121 [2024-11-19 10:25:26.840470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:13.121 [2024-11-19 10:25:26.840564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.121 [2024-11-19 10:25:26.840602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:13.121 [2024-11-19 10:25:26.840630] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.121 [2024-11-19 10:25:26.841089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.121 [2024-11-19 10:25:26.841144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.121 [2024-11-19 10:25:26.841253] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:13.121 [2024-11-19 10:25:26.841293] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:13.121 [2024-11-19 10:25:26.841337] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.121 [2024-11-19 10:25:26.841371] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: 
Invalid argument 00:14:13.121 BaseBdev1 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.121 10:25:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.500 "name": "raid_bdev1", 00:14:14.500 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:14.500 "strip_size_kb": 0, 00:14:14.500 "state": "online", 00:14:14.500 "raid_level": "raid1", 00:14:14.500 "superblock": true, 00:14:14.500 "num_base_bdevs": 4, 00:14:14.500 "num_base_bdevs_discovered": 2, 00:14:14.500 "num_base_bdevs_operational": 2, 00:14:14.500 "base_bdevs_list": [ 00:14:14.500 { 00:14:14.500 "name": null, 00:14:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.500 "is_configured": false, 00:14:14.500 "data_offset": 0, 00:14:14.500 "data_size": 63488 00:14:14.500 }, 00:14:14.500 { 00:14:14.500 "name": null, 00:14:14.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.500 "is_configured": false, 00:14:14.500 "data_offset": 2048, 00:14:14.500 "data_size": 63488 00:14:14.500 }, 00:14:14.500 { 00:14:14.500 "name": "BaseBdev3", 00:14:14.500 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:14.500 "is_configured": true, 00:14:14.500 "data_offset": 2048, 00:14:14.500 "data_size": 63488 00:14:14.500 }, 00:14:14.500 { 00:14:14.500 "name": "BaseBdev4", 00:14:14.500 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:14.500 "is_configured": true, 00:14:14.500 "data_offset": 2048, 00:14:14.500 "data_size": 63488 00:14:14.500 } 00:14:14.500 ] 00:14:14.500 }' 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.500 10:25:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.500 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.760 "name": "raid_bdev1", 00:14:14.760 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:14.760 "strip_size_kb": 0, 00:14:14.760 "state": "online", 00:14:14.760 "raid_level": "raid1", 00:14:14.760 "superblock": true, 00:14:14.760 "num_base_bdevs": 4, 00:14:14.760 "num_base_bdevs_discovered": 2, 00:14:14.760 "num_base_bdevs_operational": 2, 00:14:14.760 "base_bdevs_list": [ 00:14:14.760 { 00:14:14.760 "name": null, 00:14:14.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.760 "is_configured": false, 00:14:14.760 "data_offset": 0, 00:14:14.760 "data_size": 63488 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": null, 00:14:14.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.760 "is_configured": false, 00:14:14.760 "data_offset": 2048, 00:14:14.760 "data_size": 63488 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev3", 00:14:14.760 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 2048, 00:14:14.760 "data_size": 63488 00:14:14.760 }, 00:14:14.760 { 00:14:14.760 "name": "BaseBdev4", 00:14:14.760 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 
00:14:14.760 "is_configured": true, 00:14:14.760 "data_offset": 2048, 00:14:14.760 "data_size": 63488 00:14:14.760 } 00:14:14.760 ] 00:14:14.760 }' 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.760 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.760 [2024-11-19 10:25:28.394105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.760 [2024-11-19 
10:25:28.394319] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:14.760 [2024-11-19 10:25:28.394379] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:14.760 request: 00:14:14.760 { 00:14:14.760 "base_bdev": "BaseBdev1", 00:14:14.760 "raid_bdev": "raid_bdev1", 00:14:14.760 "method": "bdev_raid_add_base_bdev", 00:14:14.761 "req_id": 1 00:14:14.761 } 00:14:14.761 Got JSON-RPC error response 00:14:14.761 response: 00:14:14.761 { 00:14:14.761 "code": -22, 00:14:14.761 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:14.761 } 00:14:14.761 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:14.761 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:14.761 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:14.761 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:14.761 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:14.761 10:25:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.696 "name": "raid_bdev1", 00:14:15.696 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:15.696 "strip_size_kb": 0, 00:14:15.696 "state": "online", 00:14:15.696 "raid_level": "raid1", 00:14:15.696 "superblock": true, 00:14:15.696 "num_base_bdevs": 4, 00:14:15.696 "num_base_bdevs_discovered": 2, 00:14:15.696 "num_base_bdevs_operational": 2, 00:14:15.696 "base_bdevs_list": [ 00:14:15.696 { 00:14:15.696 "name": null, 00:14:15.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.696 "is_configured": false, 00:14:15.696 "data_offset": 0, 00:14:15.696 "data_size": 63488 00:14:15.696 }, 00:14:15.696 { 00:14:15.696 "name": null, 00:14:15.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.696 "is_configured": false, 00:14:15.696 "data_offset": 2048, 00:14:15.696 "data_size": 63488 00:14:15.696 }, 00:14:15.696 { 00:14:15.696 "name": 
"BaseBdev3", 00:14:15.696 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:15.696 "is_configured": true, 00:14:15.696 "data_offset": 2048, 00:14:15.696 "data_size": 63488 00:14:15.696 }, 00:14:15.696 { 00:14:15.696 "name": "BaseBdev4", 00:14:15.696 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:15.696 "is_configured": true, 00:14:15.696 "data_offset": 2048, 00:14:15.696 "data_size": 63488 00:14:15.696 } 00:14:15.696 ] 00:14:15.696 }' 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.696 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.263 "name": "raid_bdev1", 00:14:16.263 "uuid": "e7c44914-a18c-4e4a-932e-32c026df12d7", 00:14:16.263 
"strip_size_kb": 0, 00:14:16.263 "state": "online", 00:14:16.263 "raid_level": "raid1", 00:14:16.263 "superblock": true, 00:14:16.263 "num_base_bdevs": 4, 00:14:16.263 "num_base_bdevs_discovered": 2, 00:14:16.263 "num_base_bdevs_operational": 2, 00:14:16.263 "base_bdevs_list": [ 00:14:16.263 { 00:14:16.263 "name": null, 00:14:16.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.263 "is_configured": false, 00:14:16.263 "data_offset": 0, 00:14:16.263 "data_size": 63488 00:14:16.263 }, 00:14:16.263 { 00:14:16.263 "name": null, 00:14:16.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.263 "is_configured": false, 00:14:16.263 "data_offset": 2048, 00:14:16.263 "data_size": 63488 00:14:16.263 }, 00:14:16.263 { 00:14:16.263 "name": "BaseBdev3", 00:14:16.263 "uuid": "1af145d7-bef6-50b9-b558-98f62ca59dda", 00:14:16.263 "is_configured": true, 00:14:16.263 "data_offset": 2048, 00:14:16.263 "data_size": 63488 00:14:16.263 }, 00:14:16.263 { 00:14:16.263 "name": "BaseBdev4", 00:14:16.263 "uuid": "ca767ef6-7cca-5714-bc2e-c0fbfb96eb6a", 00:14:16.263 "is_configured": true, 00:14:16.263 "data_offset": 2048, 00:14:16.263 "data_size": 63488 00:14:16.263 } 00:14:16.263 ] 00:14:16.263 }' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78878 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78878 ']' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78878 00:14:16.263 
10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78878 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78878' 00:14:16.263 killing process with pid 78878 00:14:16.263 Received shutdown signal, test time was about 17.717009 seconds 00:14:16.263 00:14:16.263 Latency(us) 00:14:16.263 [2024-11-19T10:25:30.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.263 [2024-11-19T10:25:30.044Z] =================================================================================================================== 00:14:16.263 [2024-11-19T10:25:30.044Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.263 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78878 00:14:16.263 [2024-11-19 10:25:29.977294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.263 [2024-11-19 10:25:29.977417] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.264 [2024-11-19 10:25:29.977484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.264 10:25:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78878 00:14:16.264 [2024-11-19 10:25:29.977495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:16.832 [2024-11-19 10:25:30.373704] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:17.766 10:25:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:17.767 00:14:17.767 real 0m20.997s 00:14:17.767 user 0m27.330s 00:14:17.767 sys 0m2.504s 00:14:17.767 ************************************ 00:14:17.767 END TEST raid_rebuild_test_sb_io 00:14:17.767 ************************************ 00:14:17.767 10:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:17.767 10:25:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.767 10:25:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:17.767 10:25:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:17.767 10:25:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:17.767 10:25:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:17.767 10:25:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:17.767 ************************************ 00:14:17.767 START TEST raid5f_state_function_test 00:14:17.767 ************************************ 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= 
num_base_bdevs )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:17.767 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79594 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79594' 00:14:18.026 Process raid pid: 79594 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79594 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79594 ']' 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.026 10:25:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.026 [2024-11-19 10:25:31.632154] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:18.026 [2024-11-19 10:25:31.632347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.026 [2024-11-19 10:25:31.800346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.285 [2024-11-19 10:25:31.905520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.544 [2024-11-19 10:25:32.101233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.544 [2024-11-19 10:25:32.101260] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.804 [2024-11-19 10:25:32.463110] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.804 [2024-11-19 10:25:32.463160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.804 [2024-11-19 10:25:32.463171] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.804 [2024-11-19 10:25:32.463180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.804 [2024-11-19 10:25:32.463190] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:18.804 [2024-11-19 10:25:32.463199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.804 "name": "Existed_Raid", 00:14:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.804 "strip_size_kb": 64, 00:14:18.804 "state": "configuring", 00:14:18.804 "raid_level": "raid5f", 00:14:18.804 "superblock": false, 00:14:18.804 "num_base_bdevs": 3, 00:14:18.804 "num_base_bdevs_discovered": 0, 00:14:18.804 "num_base_bdevs_operational": 3, 00:14:18.804 "base_bdevs_list": [ 00:14:18.804 { 00:14:18.804 "name": "BaseBdev1", 00:14:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.804 "is_configured": false, 00:14:18.804 "data_offset": 0, 00:14:18.804 "data_size": 0 00:14:18.804 }, 00:14:18.804 { 00:14:18.804 "name": "BaseBdev2", 00:14:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.804 "is_configured": false, 00:14:18.804 "data_offset": 0, 00:14:18.804 "data_size": 0 00:14:18.804 }, 00:14:18.804 { 00:14:18.804 "name": "BaseBdev3", 00:14:18.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.804 "is_configured": false, 00:14:18.804 "data_offset": 0, 00:14:18.804 "data_size": 0 00:14:18.804 } 00:14:18.804 ] 00:14:18.804 }' 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.804 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 [2024-11-19 10:25:32.942194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.372 [2024-11-19 10:25:32.942272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 [2024-11-19 10:25:32.950191] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.372 [2024-11-19 10:25:32.950271] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.372 [2024-11-19 10:25:32.950318] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.372 [2024-11-19 10:25:32.950341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.372 [2024-11-19 10:25:32.950377] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.372 [2024-11-19 10:25:32.950409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 [2024-11-19 10:25:32.994450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.372 BaseBdev1 00:14:19.372 10:25:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.372 10:25:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 [ 00:14:19.372 { 00:14:19.372 "name": "BaseBdev1", 00:14:19.372 "aliases": [ 00:14:19.372 "909a5fdc-ab5d-430c-9fad-807ddc8f85f0" 00:14:19.372 ], 00:14:19.372 "product_name": "Malloc disk", 00:14:19.372 "block_size": 512, 00:14:19.372 "num_blocks": 65536, 00:14:19.372 "uuid": "909a5fdc-ab5d-430c-9fad-807ddc8f85f0", 00:14:19.372 "assigned_rate_limits": { 00:14:19.372 "rw_ios_per_sec": 0, 00:14:19.372 
"rw_mbytes_per_sec": 0, 00:14:19.372 "r_mbytes_per_sec": 0, 00:14:19.372 "w_mbytes_per_sec": 0 00:14:19.372 }, 00:14:19.372 "claimed": true, 00:14:19.372 "claim_type": "exclusive_write", 00:14:19.372 "zoned": false, 00:14:19.372 "supported_io_types": { 00:14:19.372 "read": true, 00:14:19.372 "write": true, 00:14:19.372 "unmap": true, 00:14:19.372 "flush": true, 00:14:19.372 "reset": true, 00:14:19.372 "nvme_admin": false, 00:14:19.372 "nvme_io": false, 00:14:19.372 "nvme_io_md": false, 00:14:19.372 "write_zeroes": true, 00:14:19.372 "zcopy": true, 00:14:19.372 "get_zone_info": false, 00:14:19.372 "zone_management": false, 00:14:19.372 "zone_append": false, 00:14:19.372 "compare": false, 00:14:19.372 "compare_and_write": false, 00:14:19.372 "abort": true, 00:14:19.372 "seek_hole": false, 00:14:19.372 "seek_data": false, 00:14:19.372 "copy": true, 00:14:19.372 "nvme_iov_md": false 00:14:19.372 }, 00:14:19.372 "memory_domains": [ 00:14:19.372 { 00:14:19.372 "dma_device_id": "system", 00:14:19.372 "dma_device_type": 1 00:14:19.372 }, 00:14:19.372 { 00:14:19.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.372 "dma_device_type": 2 00:14:19.372 } 00:14:19.372 ], 00:14:19.372 "driver_specific": {} 00:14:19.372 } 00:14:19.372 ] 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.372 10:25:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.372 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.372 "name": "Existed_Raid", 00:14:19.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.372 "strip_size_kb": 64, 00:14:19.372 "state": "configuring", 00:14:19.372 "raid_level": "raid5f", 00:14:19.372 "superblock": false, 00:14:19.372 "num_base_bdevs": 3, 00:14:19.372 "num_base_bdevs_discovered": 1, 00:14:19.372 "num_base_bdevs_operational": 3, 00:14:19.372 "base_bdevs_list": [ 00:14:19.372 { 00:14:19.372 "name": "BaseBdev1", 00:14:19.372 "uuid": "909a5fdc-ab5d-430c-9fad-807ddc8f85f0", 00:14:19.372 "is_configured": true, 00:14:19.372 "data_offset": 0, 00:14:19.372 "data_size": 65536 00:14:19.372 }, 00:14:19.372 { 00:14:19.372 "name": 
"BaseBdev2", 00:14:19.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.372 "is_configured": false, 00:14:19.372 "data_offset": 0, 00:14:19.372 "data_size": 0 00:14:19.372 }, 00:14:19.372 { 00:14:19.372 "name": "BaseBdev3", 00:14:19.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.372 "is_configured": false, 00:14:19.372 "data_offset": 0, 00:14:19.373 "data_size": 0 00:14:19.373 } 00:14:19.373 ] 00:14:19.373 }' 00:14:19.373 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.373 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.941 [2024-11-19 10:25:33.477679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.941 [2024-11-19 10:25:33.477726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.941 [2024-11-19 10:25:33.489706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.941 [2024-11-19 10:25:33.491441] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:19.941 [2024-11-19 10:25:33.491482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.941 [2024-11-19 10:25:33.491492] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.941 [2024-11-19 10:25:33.491501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.941 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.942 "name": "Existed_Raid", 00:14:19.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.942 "strip_size_kb": 64, 00:14:19.942 "state": "configuring", 00:14:19.942 "raid_level": "raid5f", 00:14:19.942 "superblock": false, 00:14:19.942 "num_base_bdevs": 3, 00:14:19.942 "num_base_bdevs_discovered": 1, 00:14:19.942 "num_base_bdevs_operational": 3, 00:14:19.942 "base_bdevs_list": [ 00:14:19.942 { 00:14:19.942 "name": "BaseBdev1", 00:14:19.942 "uuid": "909a5fdc-ab5d-430c-9fad-807ddc8f85f0", 00:14:19.942 "is_configured": true, 00:14:19.942 "data_offset": 0, 00:14:19.942 "data_size": 65536 00:14:19.942 }, 00:14:19.942 { 00:14:19.942 "name": "BaseBdev2", 00:14:19.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.942 "is_configured": false, 00:14:19.942 "data_offset": 0, 00:14:19.942 "data_size": 0 00:14:19.942 }, 00:14:19.942 { 00:14:19.942 "name": "BaseBdev3", 00:14:19.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.942 "is_configured": false, 00:14:19.942 "data_offset": 0, 00:14:19.942 "data_size": 0 00:14:19.942 } 00:14:19.942 ] 00:14:19.942 }' 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.942 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.202 [2024-11-19 10:25:33.962987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.202 BaseBdev2 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.202 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.462 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.462 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.462 10:25:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.462 10:25:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.462 [ 00:14:20.462 { 00:14:20.462 "name": "BaseBdev2", 00:14:20.462 "aliases": [ 00:14:20.462 "48d41fb5-28ef-4dc0-a220-8a77afcaac5d" 00:14:20.462 ], 00:14:20.462 "product_name": "Malloc disk", 00:14:20.462 "block_size": 512, 00:14:20.462 "num_blocks": 65536, 00:14:20.462 "uuid": "48d41fb5-28ef-4dc0-a220-8a77afcaac5d", 00:14:20.462 "assigned_rate_limits": { 00:14:20.462 "rw_ios_per_sec": 0, 00:14:20.462 "rw_mbytes_per_sec": 0, 00:14:20.462 "r_mbytes_per_sec": 0, 00:14:20.462 "w_mbytes_per_sec": 0 00:14:20.462 }, 00:14:20.462 "claimed": true, 00:14:20.462 "claim_type": "exclusive_write", 00:14:20.462 "zoned": false, 00:14:20.462 "supported_io_types": { 00:14:20.462 "read": true, 00:14:20.462 "write": true, 00:14:20.462 "unmap": true, 00:14:20.462 "flush": true, 00:14:20.462 "reset": true, 00:14:20.462 "nvme_admin": false, 00:14:20.462 "nvme_io": false, 00:14:20.462 "nvme_io_md": false, 00:14:20.462 "write_zeroes": true, 00:14:20.462 "zcopy": true, 00:14:20.462 "get_zone_info": false, 00:14:20.462 "zone_management": false, 00:14:20.462 "zone_append": false, 00:14:20.462 "compare": false, 00:14:20.462 "compare_and_write": false, 00:14:20.462 "abort": true, 00:14:20.462 "seek_hole": false, 00:14:20.462 "seek_data": false, 00:14:20.462 "copy": true, 00:14:20.462 "nvme_iov_md": false 00:14:20.462 }, 00:14:20.462 "memory_domains": [ 00:14:20.462 { 00:14:20.462 "dma_device_id": "system", 00:14:20.462 "dma_device_type": 1 00:14:20.462 }, 00:14:20.462 { 00:14:20.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.462 "dma_device_type": 2 00:14:20.462 } 00:14:20.462 ], 00:14:20.462 "driver_specific": {} 00:14:20.462 } 00:14:20.462 ] 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:20.462 "name": "Existed_Raid", 00:14:20.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.462 "strip_size_kb": 64, 00:14:20.462 "state": "configuring", 00:14:20.462 "raid_level": "raid5f", 00:14:20.462 "superblock": false, 00:14:20.462 "num_base_bdevs": 3, 00:14:20.462 "num_base_bdevs_discovered": 2, 00:14:20.462 "num_base_bdevs_operational": 3, 00:14:20.462 "base_bdevs_list": [ 00:14:20.462 { 00:14:20.462 "name": "BaseBdev1", 00:14:20.462 "uuid": "909a5fdc-ab5d-430c-9fad-807ddc8f85f0", 00:14:20.462 "is_configured": true, 00:14:20.462 "data_offset": 0, 00:14:20.462 "data_size": 65536 00:14:20.462 }, 00:14:20.462 { 00:14:20.462 "name": "BaseBdev2", 00:14:20.462 "uuid": "48d41fb5-28ef-4dc0-a220-8a77afcaac5d", 00:14:20.462 "is_configured": true, 00:14:20.462 "data_offset": 0, 00:14:20.462 "data_size": 65536 00:14:20.462 }, 00:14:20.462 { 00:14:20.462 "name": "BaseBdev3", 00:14:20.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.462 "is_configured": false, 00:14:20.462 "data_offset": 0, 00:14:20.462 "data_size": 0 00:14:20.462 } 00:14:20.462 ] 00:14:20.462 }' 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.462 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.721 [2024-11-19 10:25:34.462080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:20.721 [2024-11-19 10:25:34.462145] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:20.721 [2024-11-19 10:25:34.462159] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:20.721 [2024-11-19 10:25:34.462445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:20.721 [2024-11-19 10:25:34.467419] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:20.721 [2024-11-19 10:25:34.467440] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:20.721 [2024-11-19 10:25:34.467726] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.721 BaseBdev3 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.721 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.721 [ 00:14:20.721 { 00:14:20.721 "name": "BaseBdev3", 00:14:20.721 "aliases": [ 00:14:20.721 "1ce79a5d-9d99-4c43-bdb9-aeac09a71195" 00:14:20.721 ], 00:14:20.721 "product_name": "Malloc disk", 00:14:20.721 "block_size": 512, 00:14:20.721 "num_blocks": 65536, 00:14:20.721 "uuid": "1ce79a5d-9d99-4c43-bdb9-aeac09a71195", 00:14:20.721 "assigned_rate_limits": { 00:14:20.721 "rw_ios_per_sec": 0, 00:14:20.721 "rw_mbytes_per_sec": 0, 00:14:20.721 "r_mbytes_per_sec": 0, 00:14:20.721 "w_mbytes_per_sec": 0 00:14:20.721 }, 00:14:20.721 "claimed": true, 00:14:20.721 "claim_type": "exclusive_write", 00:14:20.980 "zoned": false, 00:14:20.980 "supported_io_types": { 00:14:20.980 "read": true, 00:14:20.980 "write": true, 00:14:20.980 "unmap": true, 00:14:20.980 "flush": true, 00:14:20.980 "reset": true, 00:14:20.980 "nvme_admin": false, 00:14:20.980 "nvme_io": false, 00:14:20.980 "nvme_io_md": false, 00:14:20.980 "write_zeroes": true, 00:14:20.980 "zcopy": true, 00:14:20.980 "get_zone_info": false, 00:14:20.980 "zone_management": false, 00:14:20.980 "zone_append": false, 00:14:20.980 "compare": false, 00:14:20.980 "compare_and_write": false, 00:14:20.980 "abort": true, 00:14:20.980 "seek_hole": false, 00:14:20.980 "seek_data": false, 00:14:20.980 "copy": true, 00:14:20.980 "nvme_iov_md": false 00:14:20.980 }, 00:14:20.980 "memory_domains": [ 00:14:20.980 { 00:14:20.980 "dma_device_id": "system", 00:14:20.980 "dma_device_type": 1 00:14:20.980 }, 00:14:20.980 { 00:14:20.980 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.980 "dma_device_type": 2 00:14:20.980 } 00:14:20.980 ], 00:14:20.980 "driver_specific": {} 00:14:20.980 } 00:14:20.980 ] 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.980 10:25:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.980 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.980 "name": "Existed_Raid", 00:14:20.980 "uuid": "68d63cb4-e741-4c22-9c28-31ca47683785", 00:14:20.980 "strip_size_kb": 64, 00:14:20.980 "state": "online", 00:14:20.980 "raid_level": "raid5f", 00:14:20.980 "superblock": false, 00:14:20.980 "num_base_bdevs": 3, 00:14:20.980 "num_base_bdevs_discovered": 3, 00:14:20.980 "num_base_bdevs_operational": 3, 00:14:20.980 "base_bdevs_list": [ 00:14:20.980 { 00:14:20.980 "name": "BaseBdev1", 00:14:20.980 "uuid": "909a5fdc-ab5d-430c-9fad-807ddc8f85f0", 00:14:20.980 "is_configured": true, 00:14:20.980 "data_offset": 0, 00:14:20.980 "data_size": 65536 00:14:20.980 }, 00:14:20.980 { 00:14:20.980 "name": "BaseBdev2", 00:14:20.980 "uuid": "48d41fb5-28ef-4dc0-a220-8a77afcaac5d", 00:14:20.980 "is_configured": true, 00:14:20.980 "data_offset": 0, 00:14:20.980 "data_size": 65536 00:14:20.980 }, 00:14:20.980 { 00:14:20.980 "name": "BaseBdev3", 00:14:20.980 "uuid": "1ce79a5d-9d99-4c43-bdb9-aeac09a71195", 00:14:20.980 "is_configured": true, 00:14:20.980 "data_offset": 0, 00:14:20.980 "data_size": 65536 00:14:20.981 } 00:14:20.981 ] 00:14:20.981 }' 00:14:20.981 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.981 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.239 10:25:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.239 [2024-11-19 10:25:34.969290] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.239 10:25:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.239 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.239 "name": "Existed_Raid", 00:14:21.239 "aliases": [ 00:14:21.239 "68d63cb4-e741-4c22-9c28-31ca47683785" 00:14:21.239 ], 00:14:21.239 "product_name": "Raid Volume", 00:14:21.239 "block_size": 512, 00:14:21.239 "num_blocks": 131072, 00:14:21.239 "uuid": "68d63cb4-e741-4c22-9c28-31ca47683785", 00:14:21.239 "assigned_rate_limits": { 00:14:21.239 "rw_ios_per_sec": 0, 00:14:21.239 "rw_mbytes_per_sec": 0, 00:14:21.239 "r_mbytes_per_sec": 0, 00:14:21.239 "w_mbytes_per_sec": 0 00:14:21.239 }, 00:14:21.239 "claimed": false, 00:14:21.239 "zoned": false, 00:14:21.239 "supported_io_types": { 00:14:21.239 "read": true, 00:14:21.239 "write": true, 00:14:21.239 "unmap": false, 00:14:21.239 "flush": false, 00:14:21.239 "reset": true, 00:14:21.240 "nvme_admin": false, 00:14:21.240 "nvme_io": false, 00:14:21.240 "nvme_io_md": false, 00:14:21.240 "write_zeroes": true, 00:14:21.240 "zcopy": false, 00:14:21.240 "get_zone_info": false, 00:14:21.240 "zone_management": false, 00:14:21.240 "zone_append": false, 
00:14:21.240 "compare": false, 00:14:21.240 "compare_and_write": false, 00:14:21.240 "abort": false, 00:14:21.240 "seek_hole": false, 00:14:21.240 "seek_data": false, 00:14:21.240 "copy": false, 00:14:21.240 "nvme_iov_md": false 00:14:21.240 }, 00:14:21.240 "driver_specific": { 00:14:21.240 "raid": { 00:14:21.240 "uuid": "68d63cb4-e741-4c22-9c28-31ca47683785", 00:14:21.240 "strip_size_kb": 64, 00:14:21.240 "state": "online", 00:14:21.240 "raid_level": "raid5f", 00:14:21.240 "superblock": false, 00:14:21.240 "num_base_bdevs": 3, 00:14:21.240 "num_base_bdevs_discovered": 3, 00:14:21.240 "num_base_bdevs_operational": 3, 00:14:21.240 "base_bdevs_list": [ 00:14:21.240 { 00:14:21.240 "name": "BaseBdev1", 00:14:21.240 "uuid": "909a5fdc-ab5d-430c-9fad-807ddc8f85f0", 00:14:21.240 "is_configured": true, 00:14:21.240 "data_offset": 0, 00:14:21.240 "data_size": 65536 00:14:21.240 }, 00:14:21.240 { 00:14:21.240 "name": "BaseBdev2", 00:14:21.240 "uuid": "48d41fb5-28ef-4dc0-a220-8a77afcaac5d", 00:14:21.240 "is_configured": true, 00:14:21.240 "data_offset": 0, 00:14:21.240 "data_size": 65536 00:14:21.240 }, 00:14:21.240 { 00:14:21.240 "name": "BaseBdev3", 00:14:21.240 "uuid": "1ce79a5d-9d99-4c43-bdb9-aeac09a71195", 00:14:21.240 "is_configured": true, 00:14:21.240 "data_offset": 0, 00:14:21.240 "data_size": 65536 00:14:21.240 } 00:14:21.240 ] 00:14:21.240 } 00:14:21.240 } 00:14:21.240 }' 00:14:21.240 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.498 BaseBdev2 00:14:21.498 BaseBdev3' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.498 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.498 [2024-11-19 10:25:35.260630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:21.756 
10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.756 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.756 "name": "Existed_Raid", 00:14:21.756 "uuid": "68d63cb4-e741-4c22-9c28-31ca47683785", 00:14:21.756 "strip_size_kb": 64, 00:14:21.756 "state": 
"online", 00:14:21.756 "raid_level": "raid5f", 00:14:21.756 "superblock": false, 00:14:21.756 "num_base_bdevs": 3, 00:14:21.756 "num_base_bdevs_discovered": 2, 00:14:21.756 "num_base_bdevs_operational": 2, 00:14:21.756 "base_bdevs_list": [ 00:14:21.756 { 00:14:21.756 "name": null, 00:14:21.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.756 "is_configured": false, 00:14:21.756 "data_offset": 0, 00:14:21.756 "data_size": 65536 00:14:21.756 }, 00:14:21.756 { 00:14:21.756 "name": "BaseBdev2", 00:14:21.756 "uuid": "48d41fb5-28ef-4dc0-a220-8a77afcaac5d", 00:14:21.756 "is_configured": true, 00:14:21.756 "data_offset": 0, 00:14:21.756 "data_size": 65536 00:14:21.756 }, 00:14:21.756 { 00:14:21.756 "name": "BaseBdev3", 00:14:21.756 "uuid": "1ce79a5d-9d99-4c43-bdb9-aeac09a71195", 00:14:21.756 "is_configured": true, 00:14:21.756 "data_offset": 0, 00:14:21.756 "data_size": 65536 00:14:21.756 } 00:14:21.756 ] 00:14:21.756 }' 00:14:21.757 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.757 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.322 [2024-11-19 10:25:35.843919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.322 [2024-11-19 10:25:35.844072] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.322 [2024-11-19 10:25:35.931943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.322 10:25:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.322 [2024-11-19 10:25:35.991865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.322 [2024-11-19 10:25:35.991913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.322 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.587 BaseBdev2 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.587 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:22.587 [ 00:14:22.587 { 00:14:22.587 "name": "BaseBdev2", 00:14:22.587 "aliases": [ 00:14:22.587 "26ee17be-e4f2-4942-a6d0-6cb989a53b2d" 00:14:22.587 ], 00:14:22.587 "product_name": "Malloc disk", 00:14:22.587 "block_size": 512, 00:14:22.587 "num_blocks": 65536, 00:14:22.587 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:22.587 "assigned_rate_limits": { 00:14:22.587 "rw_ios_per_sec": 0, 00:14:22.587 "rw_mbytes_per_sec": 0, 00:14:22.587 "r_mbytes_per_sec": 0, 00:14:22.587 "w_mbytes_per_sec": 0 00:14:22.587 }, 00:14:22.587 "claimed": false, 00:14:22.587 "zoned": false, 00:14:22.587 "supported_io_types": { 00:14:22.587 "read": true, 00:14:22.587 "write": true, 00:14:22.587 "unmap": true, 00:14:22.587 "flush": true, 00:14:22.587 "reset": true, 00:14:22.587 "nvme_admin": false, 00:14:22.587 "nvme_io": false, 00:14:22.587 "nvme_io_md": false, 00:14:22.587 "write_zeroes": true, 00:14:22.587 "zcopy": true, 00:14:22.587 "get_zone_info": false, 00:14:22.587 "zone_management": false, 00:14:22.587 "zone_append": false, 00:14:22.587 "compare": false, 00:14:22.587 "compare_and_write": false, 00:14:22.587 "abort": true, 00:14:22.587 "seek_hole": false, 00:14:22.587 "seek_data": false, 00:14:22.588 "copy": true, 00:14:22.588 "nvme_iov_md": false 00:14:22.588 }, 00:14:22.588 "memory_domains": [ 00:14:22.588 { 00:14:22.588 "dma_device_id": "system", 00:14:22.588 "dma_device_type": 1 00:14:22.588 }, 00:14:22.588 { 00:14:22.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.588 "dma_device_type": 2 00:14:22.588 } 00:14:22.588 ], 00:14:22.588 "driver_specific": {} 00:14:22.588 } 00:14:22.588 ] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.588 BaseBdev3 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:22.588 [ 00:14:22.588 { 00:14:22.588 "name": "BaseBdev3", 00:14:22.588 "aliases": [ 00:14:22.588 "fde489e0-7825-48ac-857b-baddc03c7b18" 00:14:22.588 ], 00:14:22.588 "product_name": "Malloc disk", 00:14:22.588 "block_size": 512, 00:14:22.588 "num_blocks": 65536, 00:14:22.588 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:22.588 "assigned_rate_limits": { 00:14:22.588 "rw_ios_per_sec": 0, 00:14:22.588 "rw_mbytes_per_sec": 0, 00:14:22.588 "r_mbytes_per_sec": 0, 00:14:22.588 "w_mbytes_per_sec": 0 00:14:22.588 }, 00:14:22.588 "claimed": false, 00:14:22.588 "zoned": false, 00:14:22.588 "supported_io_types": { 00:14:22.588 "read": true, 00:14:22.588 "write": true, 00:14:22.588 "unmap": true, 00:14:22.588 "flush": true, 00:14:22.588 "reset": true, 00:14:22.588 "nvme_admin": false, 00:14:22.588 "nvme_io": false, 00:14:22.588 "nvme_io_md": false, 00:14:22.588 "write_zeroes": true, 00:14:22.588 "zcopy": true, 00:14:22.588 "get_zone_info": false, 00:14:22.588 "zone_management": false, 00:14:22.588 "zone_append": false, 00:14:22.588 "compare": false, 00:14:22.588 "compare_and_write": false, 00:14:22.588 "abort": true, 00:14:22.588 "seek_hole": false, 00:14:22.588 "seek_data": false, 00:14:22.588 "copy": true, 00:14:22.588 "nvme_iov_md": false 00:14:22.588 }, 00:14:22.588 "memory_domains": [ 00:14:22.588 { 00:14:22.588 "dma_device_id": "system", 00:14:22.588 "dma_device_type": 1 00:14:22.588 }, 00:14:22.588 { 00:14:22.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.588 "dma_device_type": 2 00:14:22.588 } 00:14:22.588 ], 00:14:22.588 "driver_specific": {} 00:14:22.588 } 00:14:22.588 ] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.588 10:25:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.588 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.589 [2024-11-19 10:25:36.301828] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.589 [2024-11-19 10:25:36.301915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.589 [2024-11-19 10:25:36.301971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.589 [2024-11-19 10:25:36.303689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.589 10:25:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.589 "name": "Existed_Raid", 00:14:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.589 "strip_size_kb": 64, 00:14:22.589 "state": "configuring", 00:14:22.589 "raid_level": "raid5f", 00:14:22.589 "superblock": false, 00:14:22.589 "num_base_bdevs": 3, 00:14:22.589 "num_base_bdevs_discovered": 2, 00:14:22.589 "num_base_bdevs_operational": 3, 00:14:22.589 "base_bdevs_list": [ 00:14:22.589 { 00:14:22.589 "name": "BaseBdev1", 00:14:22.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.589 "is_configured": false, 00:14:22.589 "data_offset": 0, 00:14:22.589 "data_size": 0 00:14:22.589 }, 00:14:22.589 { 00:14:22.589 "name": "BaseBdev2", 00:14:22.589 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:22.589 "is_configured": true, 00:14:22.589 "data_offset": 0, 00:14:22.589 "data_size": 65536 00:14:22.589 }, 00:14:22.589 { 00:14:22.589 "name": "BaseBdev3", 00:14:22.589 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:22.589 "is_configured": true, 
00:14:22.589 "data_offset": 0, 00:14:22.589 "data_size": 65536 00:14:22.589 } 00:14:22.589 ] 00:14:22.589 }' 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.589 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.159 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.160 [2024-11-19 10:25:36.721093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.160 10:25:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.160 "name": "Existed_Raid", 00:14:23.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.160 "strip_size_kb": 64, 00:14:23.160 "state": "configuring", 00:14:23.160 "raid_level": "raid5f", 00:14:23.160 "superblock": false, 00:14:23.160 "num_base_bdevs": 3, 00:14:23.160 "num_base_bdevs_discovered": 1, 00:14:23.160 "num_base_bdevs_operational": 3, 00:14:23.160 "base_bdevs_list": [ 00:14:23.160 { 00:14:23.160 "name": "BaseBdev1", 00:14:23.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.160 "is_configured": false, 00:14:23.160 "data_offset": 0, 00:14:23.160 "data_size": 0 00:14:23.160 }, 00:14:23.160 { 00:14:23.160 "name": null, 00:14:23.160 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:23.160 "is_configured": false, 00:14:23.160 "data_offset": 0, 00:14:23.160 "data_size": 65536 00:14:23.160 }, 00:14:23.160 { 00:14:23.160 "name": "BaseBdev3", 00:14:23.160 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:23.160 "is_configured": true, 00:14:23.160 "data_offset": 0, 00:14:23.160 "data_size": 65536 00:14:23.160 } 00:14:23.160 ] 00:14:23.160 }' 00:14:23.160 10:25:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.160 10:25:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.419 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.419 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.419 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.419 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.419 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.679 [2024-11-19 10:25:37.263290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.679 BaseBdev1 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.679 10:25:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.679 [ 00:14:23.679 { 00:14:23.679 "name": "BaseBdev1", 00:14:23.679 "aliases": [ 00:14:23.679 "092d77fd-45c8-49a0-a690-c2f44b1effdd" 00:14:23.679 ], 00:14:23.679 "product_name": "Malloc disk", 00:14:23.679 "block_size": 512, 00:14:23.679 "num_blocks": 65536, 00:14:23.679 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:23.679 "assigned_rate_limits": { 00:14:23.679 "rw_ios_per_sec": 0, 00:14:23.679 "rw_mbytes_per_sec": 0, 00:14:23.679 "r_mbytes_per_sec": 0, 00:14:23.679 "w_mbytes_per_sec": 0 00:14:23.679 }, 00:14:23.679 "claimed": true, 00:14:23.679 "claim_type": "exclusive_write", 00:14:23.679 "zoned": false, 00:14:23.679 "supported_io_types": { 00:14:23.679 "read": true, 00:14:23.679 "write": true, 00:14:23.679 "unmap": true, 00:14:23.679 "flush": true, 00:14:23.679 "reset": true, 00:14:23.679 "nvme_admin": false, 00:14:23.679 "nvme_io": false, 00:14:23.679 "nvme_io_md": false, 00:14:23.679 "write_zeroes": true, 00:14:23.679 "zcopy": true, 00:14:23.679 "get_zone_info": false, 00:14:23.679 "zone_management": false, 00:14:23.679 "zone_append": false, 00:14:23.679 
"compare": false, 00:14:23.679 "compare_and_write": false, 00:14:23.679 "abort": true, 00:14:23.679 "seek_hole": false, 00:14:23.679 "seek_data": false, 00:14:23.679 "copy": true, 00:14:23.679 "nvme_iov_md": false 00:14:23.679 }, 00:14:23.679 "memory_domains": [ 00:14:23.679 { 00:14:23.679 "dma_device_id": "system", 00:14:23.679 "dma_device_type": 1 00:14:23.679 }, 00:14:23.679 { 00:14:23.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.679 "dma_device_type": 2 00:14:23.679 } 00:14:23.679 ], 00:14:23.679 "driver_specific": {} 00:14:23.679 } 00:14:23.679 ] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.679 10:25:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:23.679 "name": "Existed_Raid", 00:14:23.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.679 "strip_size_kb": 64, 00:14:23.679 "state": "configuring", 00:14:23.679 "raid_level": "raid5f", 00:14:23.679 "superblock": false, 00:14:23.679 "num_base_bdevs": 3, 00:14:23.679 "num_base_bdevs_discovered": 2, 00:14:23.679 "num_base_bdevs_operational": 3, 00:14:23.679 "base_bdevs_list": [ 00:14:23.679 { 00:14:23.679 "name": "BaseBdev1", 00:14:23.679 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:23.679 "is_configured": true, 00:14:23.679 "data_offset": 0, 00:14:23.679 "data_size": 65536 00:14:23.679 }, 00:14:23.679 { 00:14:23.679 "name": null, 00:14:23.679 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:23.679 "is_configured": false, 00:14:23.679 "data_offset": 0, 00:14:23.679 "data_size": 65536 00:14:23.679 }, 00:14:23.679 { 00:14:23.679 "name": "BaseBdev3", 00:14:23.679 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:23.679 "is_configured": true, 00:14:23.679 "data_offset": 0, 00:14:23.679 "data_size": 65536 00:14:23.679 } 00:14:23.679 ] 00:14:23.679 }' 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.679 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.249 10:25:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.249 [2024-11-19 10:25:37.818371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.249 10:25:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.249 "name": "Existed_Raid", 00:14:24.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.249 "strip_size_kb": 64, 00:14:24.249 "state": "configuring", 00:14:24.249 "raid_level": "raid5f", 00:14:24.249 "superblock": false, 00:14:24.249 "num_base_bdevs": 3, 00:14:24.249 "num_base_bdevs_discovered": 1, 00:14:24.249 "num_base_bdevs_operational": 3, 00:14:24.249 "base_bdevs_list": [ 00:14:24.249 { 00:14:24.249 "name": "BaseBdev1", 00:14:24.249 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:24.249 "is_configured": true, 00:14:24.249 "data_offset": 0, 00:14:24.249 "data_size": 65536 00:14:24.249 }, 00:14:24.249 { 00:14:24.249 "name": null, 00:14:24.249 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:24.249 "is_configured": false, 00:14:24.249 "data_offset": 0, 00:14:24.249 "data_size": 65536 00:14:24.249 }, 00:14:24.249 { 00:14:24.249 "name": null, 
00:14:24.249 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:24.249 "is_configured": false, 00:14:24.249 "data_offset": 0, 00:14:24.249 "data_size": 65536 00:14:24.249 } 00:14:24.249 ] 00:14:24.249 }' 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.249 10:25:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.508 [2024-11-19 10:25:38.229688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.508 10:25:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.508 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.508 "name": "Existed_Raid", 00:14:24.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.508 "strip_size_kb": 64, 00:14:24.508 "state": "configuring", 00:14:24.508 "raid_level": "raid5f", 00:14:24.508 "superblock": false, 00:14:24.508 "num_base_bdevs": 3, 00:14:24.508 "num_base_bdevs_discovered": 2, 00:14:24.508 "num_base_bdevs_operational": 3, 00:14:24.508 "base_bdevs_list": [ 00:14:24.508 { 
00:14:24.508 "name": "BaseBdev1", 00:14:24.508 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:24.508 "is_configured": true, 00:14:24.508 "data_offset": 0, 00:14:24.508 "data_size": 65536 00:14:24.508 }, 00:14:24.508 { 00:14:24.508 "name": null, 00:14:24.509 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:24.509 "is_configured": false, 00:14:24.509 "data_offset": 0, 00:14:24.509 "data_size": 65536 00:14:24.509 }, 00:14:24.509 { 00:14:24.509 "name": "BaseBdev3", 00:14:24.509 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:24.509 "is_configured": true, 00:14:24.509 "data_offset": 0, 00:14:24.509 "data_size": 65536 00:14:24.509 } 00:14:24.509 ] 00:14:24.509 }' 00:14:24.509 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.509 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.078 [2024-11-19 10:25:38.712875] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.078 "name": "Existed_Raid", 00:14:25.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.078 "strip_size_kb": 64, 00:14:25.078 "state": "configuring", 00:14:25.078 "raid_level": "raid5f", 00:14:25.078 "superblock": false, 00:14:25.078 "num_base_bdevs": 3, 00:14:25.078 "num_base_bdevs_discovered": 1, 00:14:25.078 "num_base_bdevs_operational": 3, 00:14:25.078 "base_bdevs_list": [ 00:14:25.078 { 00:14:25.078 "name": null, 00:14:25.078 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:25.078 "is_configured": false, 00:14:25.078 "data_offset": 0, 00:14:25.078 "data_size": 65536 00:14:25.078 }, 00:14:25.078 { 00:14:25.078 "name": null, 00:14:25.078 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:25.078 "is_configured": false, 00:14:25.078 "data_offset": 0, 00:14:25.078 "data_size": 65536 00:14:25.078 }, 00:14:25.078 { 00:14:25.078 "name": "BaseBdev3", 00:14:25.078 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:25.078 "is_configured": true, 00:14:25.078 "data_offset": 0, 00:14:25.078 "data_size": 65536 00:14:25.078 } 00:14:25.078 ] 00:14:25.078 }' 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.078 10:25:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.645 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.645 [2024-11-19 10:25:39.308449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.646 10:25:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.646 "name": "Existed_Raid", 00:14:25.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.646 "strip_size_kb": 64, 00:14:25.646 "state": "configuring", 00:14:25.646 "raid_level": "raid5f", 00:14:25.646 "superblock": false, 00:14:25.646 "num_base_bdevs": 3, 00:14:25.646 "num_base_bdevs_discovered": 2, 00:14:25.646 "num_base_bdevs_operational": 3, 00:14:25.646 "base_bdevs_list": [ 00:14:25.646 { 00:14:25.646 "name": null, 00:14:25.646 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:25.646 "is_configured": false, 00:14:25.646 "data_offset": 0, 00:14:25.646 "data_size": 65536 00:14:25.646 }, 00:14:25.646 { 00:14:25.646 "name": "BaseBdev2", 00:14:25.646 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:25.646 "is_configured": true, 00:14:25.646 "data_offset": 0, 00:14:25.646 "data_size": 65536 00:14:25.646 }, 00:14:25.646 { 00:14:25.646 "name": "BaseBdev3", 00:14:25.646 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:25.646 "is_configured": true, 00:14:25.646 "data_offset": 0, 00:14:25.646 "data_size": 65536 00:14:25.646 } 00:14:25.646 ] 00:14:25.646 }' 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.646 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.216 10:25:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 092d77fd-45c8-49a0-a690-c2f44b1effdd 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 [2024-11-19 10:25:39.886258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.216 [2024-11-19 10:25:39.886299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.216 [2024-11-19 10:25:39.886308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:26.216 [2024-11-19 10:25:39.886527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:26.216 [2024-11-19 10:25:39.891378] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.216 [2024-11-19 10:25:39.891398] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.216 [2024-11-19 10:25:39.891638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.216 NewBaseBdev 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.216 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.216 10:25:39 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.216 [ 00:14:26.216 { 00:14:26.216 "name": "NewBaseBdev", 00:14:26.216 "aliases": [ 00:14:26.216 "092d77fd-45c8-49a0-a690-c2f44b1effdd" 00:14:26.216 ], 00:14:26.216 "product_name": "Malloc disk", 00:14:26.216 "block_size": 512, 00:14:26.216 "num_blocks": 65536, 00:14:26.216 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:26.217 "assigned_rate_limits": { 00:14:26.217 "rw_ios_per_sec": 0, 00:14:26.217 "rw_mbytes_per_sec": 0, 00:14:26.217 "r_mbytes_per_sec": 0, 00:14:26.217 "w_mbytes_per_sec": 0 00:14:26.217 }, 00:14:26.217 "claimed": true, 00:14:26.217 "claim_type": "exclusive_write", 00:14:26.217 "zoned": false, 00:14:26.217 "supported_io_types": { 00:14:26.217 "read": true, 00:14:26.217 "write": true, 00:14:26.217 "unmap": true, 00:14:26.217 "flush": true, 00:14:26.217 "reset": true, 00:14:26.217 "nvme_admin": false, 00:14:26.217 "nvme_io": false, 00:14:26.217 "nvme_io_md": false, 00:14:26.217 "write_zeroes": true, 00:14:26.217 "zcopy": true, 00:14:26.217 "get_zone_info": false, 00:14:26.217 "zone_management": false, 00:14:26.217 "zone_append": false, 00:14:26.217 "compare": false, 00:14:26.217 "compare_and_write": false, 00:14:26.217 "abort": true, 00:14:26.217 "seek_hole": false, 00:14:26.217 "seek_data": false, 00:14:26.217 "copy": true, 00:14:26.217 "nvme_iov_md": false 00:14:26.217 }, 00:14:26.217 "memory_domains": [ 00:14:26.217 { 00:14:26.217 "dma_device_id": "system", 00:14:26.217 "dma_device_type": 1 00:14:26.217 }, 00:14:26.217 { 00:14:26.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.217 "dma_device_type": 2 00:14:26.217 } 00:14:26.217 ], 00:14:26.217 "driver_specific": {} 00:14:26.217 } 00:14:26.217 ] 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:26.217 10:25:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.217 "name": "Existed_Raid", 00:14:26.217 "uuid": "f8283716-58c2-48c1-9626-734b10b2dca1", 00:14:26.217 "strip_size_kb": 64, 00:14:26.217 "state": "online", 
00:14:26.217 "raid_level": "raid5f", 00:14:26.217 "superblock": false, 00:14:26.217 "num_base_bdevs": 3, 00:14:26.217 "num_base_bdevs_discovered": 3, 00:14:26.217 "num_base_bdevs_operational": 3, 00:14:26.217 "base_bdevs_list": [ 00:14:26.217 { 00:14:26.217 "name": "NewBaseBdev", 00:14:26.217 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:26.217 "is_configured": true, 00:14:26.217 "data_offset": 0, 00:14:26.217 "data_size": 65536 00:14:26.217 }, 00:14:26.217 { 00:14:26.217 "name": "BaseBdev2", 00:14:26.217 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:26.217 "is_configured": true, 00:14:26.217 "data_offset": 0, 00:14:26.217 "data_size": 65536 00:14:26.217 }, 00:14:26.217 { 00:14:26.217 "name": "BaseBdev3", 00:14:26.217 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:26.217 "is_configured": true, 00:14:26.217 "data_offset": 0, 00:14:26.217 "data_size": 65536 00:14:26.217 } 00:14:26.217 ] 00:14:26.217 }' 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.217 10:25:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.789 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.789 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.790 10:25:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.790 [2024-11-19 10:25:40.445113] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.790 "name": "Existed_Raid", 00:14:26.790 "aliases": [ 00:14:26.790 "f8283716-58c2-48c1-9626-734b10b2dca1" 00:14:26.790 ], 00:14:26.790 "product_name": "Raid Volume", 00:14:26.790 "block_size": 512, 00:14:26.790 "num_blocks": 131072, 00:14:26.790 "uuid": "f8283716-58c2-48c1-9626-734b10b2dca1", 00:14:26.790 "assigned_rate_limits": { 00:14:26.790 "rw_ios_per_sec": 0, 00:14:26.790 "rw_mbytes_per_sec": 0, 00:14:26.790 "r_mbytes_per_sec": 0, 00:14:26.790 "w_mbytes_per_sec": 0 00:14:26.790 }, 00:14:26.790 "claimed": false, 00:14:26.790 "zoned": false, 00:14:26.790 "supported_io_types": { 00:14:26.790 "read": true, 00:14:26.790 "write": true, 00:14:26.790 "unmap": false, 00:14:26.790 "flush": false, 00:14:26.790 "reset": true, 00:14:26.790 "nvme_admin": false, 00:14:26.790 "nvme_io": false, 00:14:26.790 "nvme_io_md": false, 00:14:26.790 "write_zeroes": true, 00:14:26.790 "zcopy": false, 00:14:26.790 "get_zone_info": false, 00:14:26.790 "zone_management": false, 00:14:26.790 "zone_append": false, 00:14:26.790 "compare": false, 00:14:26.790 "compare_and_write": false, 00:14:26.790 "abort": false, 00:14:26.790 "seek_hole": false, 00:14:26.790 "seek_data": false, 00:14:26.790 "copy": false, 00:14:26.790 "nvme_iov_md": false 00:14:26.790 }, 00:14:26.790 "driver_specific": { 00:14:26.790 "raid": { 00:14:26.790 "uuid": 
"f8283716-58c2-48c1-9626-734b10b2dca1", 00:14:26.790 "strip_size_kb": 64, 00:14:26.790 "state": "online", 00:14:26.790 "raid_level": "raid5f", 00:14:26.790 "superblock": false, 00:14:26.790 "num_base_bdevs": 3, 00:14:26.790 "num_base_bdevs_discovered": 3, 00:14:26.790 "num_base_bdevs_operational": 3, 00:14:26.790 "base_bdevs_list": [ 00:14:26.790 { 00:14:26.790 "name": "NewBaseBdev", 00:14:26.790 "uuid": "092d77fd-45c8-49a0-a690-c2f44b1effdd", 00:14:26.790 "is_configured": true, 00:14:26.790 "data_offset": 0, 00:14:26.790 "data_size": 65536 00:14:26.790 }, 00:14:26.790 { 00:14:26.790 "name": "BaseBdev2", 00:14:26.790 "uuid": "26ee17be-e4f2-4942-a6d0-6cb989a53b2d", 00:14:26.790 "is_configured": true, 00:14:26.790 "data_offset": 0, 00:14:26.790 "data_size": 65536 00:14:26.790 }, 00:14:26.790 { 00:14:26.790 "name": "BaseBdev3", 00:14:26.790 "uuid": "fde489e0-7825-48ac-857b-baddc03c7b18", 00:14:26.790 "is_configured": true, 00:14:26.790 "data_offset": 0, 00:14:26.790 "data_size": 65536 00:14:26.790 } 00:14:26.790 ] 00:14:26.790 } 00:14:26.790 } 00:14:26.790 }' 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:26.790 BaseBdev2 00:14:26.790 BaseBdev3' 00:14:26.790 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.049 [2024-11-19 10:25:40.708447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:27.049 [2024-11-19 10:25:40.708472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.049 [2024-11-19 10:25:40.708538] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:27.049 [2024-11-19 10:25:40.708803] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:27.049 [2024-11-19 10:25:40.708815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79594 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79594 ']' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79594 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79594 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79594' 00:14:27.049 killing process with pid 79594 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79594 00:14:27.049 [2024-11-19 10:25:40.742731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:27.049 10:25:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79594 00:14:27.308 [2024-11-19 10:25:41.021162] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.687 ************************************ 00:14:28.687 END TEST raid5f_state_function_test 00:14:28.687 ************************************ 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.687 00:14:28.687 real 0m10.513s 00:14:28.687 user 0m16.815s 00:14:28.687 sys 0m1.875s 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.687 10:25:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:28.687 10:25:42 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:28.687 10:25:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.687 10:25:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.687 ************************************ 00:14:28.687 START TEST raid5f_state_function_test_sb 00:14:28.687 ************************************ 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:28.687 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:28.688 10:25:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80210 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80210' 00:14:28.688 Process raid pid: 80210 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80210 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80210 ']' 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.688 10:25:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.688 [2024-11-19 10:25:42.224171] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:28.688 [2024-11-19 10:25:42.224395] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.688 [2024-11-19 10:25:42.400457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.948 [2024-11-19 10:25:42.508234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.948 [2024-11-19 10:25:42.696100] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.948 [2024-11-19 10:25:42.696134] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.517 [2024-11-19 10:25:43.039341] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.517 [2024-11-19 10:25:43.039393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.517 [2024-11-19 10:25:43.039404] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.517 [2024-11-19 10:25:43.039413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.517 [2024-11-19 10:25:43.039419] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:29.517 [2024-11-19 10:25:43.039427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.517 10:25:43 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.517 "name": "Existed_Raid", 00:14:29.517 "uuid": "fa0f9df3-7cec-49fe-aef0-603798dfbb7f", 00:14:29.517 "strip_size_kb": 64, 00:14:29.517 "state": "configuring", 00:14:29.517 "raid_level": "raid5f", 00:14:29.517 "superblock": true, 00:14:29.517 "num_base_bdevs": 3, 00:14:29.517 "num_base_bdevs_discovered": 0, 00:14:29.517 "num_base_bdevs_operational": 3, 00:14:29.517 "base_bdevs_list": [ 00:14:29.517 { 00:14:29.517 "name": "BaseBdev1", 00:14:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.517 "is_configured": false, 00:14:29.517 "data_offset": 0, 00:14:29.517 "data_size": 0 00:14:29.517 }, 00:14:29.517 { 00:14:29.517 "name": "BaseBdev2", 00:14:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.517 "is_configured": false, 00:14:29.517 "data_offset": 0, 00:14:29.517 "data_size": 0 00:14:29.517 }, 00:14:29.517 { 00:14:29.517 "name": "BaseBdev3", 00:14:29.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.517 "is_configured": false, 00:14:29.517 "data_offset": 0, 00:14:29.517 "data_size": 0 00:14:29.517 } 00:14:29.517 ] 00:14:29.517 }' 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.517 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.777 [2024-11-19 10:25:43.490493] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:29.777 
[2024-11-19 10:25:43.490573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.777 [2024-11-19 10:25:43.502472] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:29.777 [2024-11-19 10:25:43.502562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:29.777 [2024-11-19 10:25:43.502589] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:29.777 [2024-11-19 10:25:43.502611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:29.777 [2024-11-19 10:25:43.502628] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:29.777 [2024-11-19 10:25:43.502648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:29.777 [2024-11-19 10:25:43.550216] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:29.777 BaseBdev1 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.777 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.037 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.037 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:30.037 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.037 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.037 [ 00:14:30.037 { 00:14:30.037 "name": "BaseBdev1", 00:14:30.037 "aliases": [ 00:14:30.037 "a1c8d56a-2cd4-4550-af35-901560436fe7" 00:14:30.037 ], 00:14:30.037 "product_name": "Malloc disk", 00:14:30.037 "block_size": 512, 00:14:30.037 
"num_blocks": 65536, 00:14:30.037 "uuid": "a1c8d56a-2cd4-4550-af35-901560436fe7", 00:14:30.037 "assigned_rate_limits": { 00:14:30.037 "rw_ios_per_sec": 0, 00:14:30.037 "rw_mbytes_per_sec": 0, 00:14:30.037 "r_mbytes_per_sec": 0, 00:14:30.037 "w_mbytes_per_sec": 0 00:14:30.037 }, 00:14:30.037 "claimed": true, 00:14:30.037 "claim_type": "exclusive_write", 00:14:30.037 "zoned": false, 00:14:30.037 "supported_io_types": { 00:14:30.037 "read": true, 00:14:30.037 "write": true, 00:14:30.037 "unmap": true, 00:14:30.037 "flush": true, 00:14:30.037 "reset": true, 00:14:30.037 "nvme_admin": false, 00:14:30.037 "nvme_io": false, 00:14:30.037 "nvme_io_md": false, 00:14:30.037 "write_zeroes": true, 00:14:30.037 "zcopy": true, 00:14:30.037 "get_zone_info": false, 00:14:30.037 "zone_management": false, 00:14:30.037 "zone_append": false, 00:14:30.037 "compare": false, 00:14:30.037 "compare_and_write": false, 00:14:30.037 "abort": true, 00:14:30.037 "seek_hole": false, 00:14:30.037 "seek_data": false, 00:14:30.037 "copy": true, 00:14:30.037 "nvme_iov_md": false 00:14:30.037 }, 00:14:30.037 "memory_domains": [ 00:14:30.037 { 00:14:30.037 "dma_device_id": "system", 00:14:30.037 "dma_device_type": 1 00:14:30.037 }, 00:14:30.037 { 00:14:30.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.037 "dma_device_type": 2 00:14:30.037 } 00:14:30.037 ], 00:14:30.037 "driver_specific": {} 00:14:30.037 } 00:14:30.037 ] 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.038 "name": "Existed_Raid", 00:14:30.038 "uuid": "d7e08a46-9196-4db4-81f1-7c75e5178ba0", 00:14:30.038 "strip_size_kb": 64, 00:14:30.038 "state": "configuring", 00:14:30.038 "raid_level": "raid5f", 00:14:30.038 "superblock": true, 00:14:30.038 "num_base_bdevs": 3, 00:14:30.038 "num_base_bdevs_discovered": 1, 00:14:30.038 "num_base_bdevs_operational": 3, 00:14:30.038 "base_bdevs_list": [ 00:14:30.038 { 00:14:30.038 
"name": "BaseBdev1", 00:14:30.038 "uuid": "a1c8d56a-2cd4-4550-af35-901560436fe7", 00:14:30.038 "is_configured": true, 00:14:30.038 "data_offset": 2048, 00:14:30.038 "data_size": 63488 00:14:30.038 }, 00:14:30.038 { 00:14:30.038 "name": "BaseBdev2", 00:14:30.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.038 "is_configured": false, 00:14:30.038 "data_offset": 0, 00:14:30.038 "data_size": 0 00:14:30.038 }, 00:14:30.038 { 00:14:30.038 "name": "BaseBdev3", 00:14:30.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.038 "is_configured": false, 00:14:30.038 "data_offset": 0, 00:14:30.038 "data_size": 0 00:14:30.038 } 00:14:30.038 ] 00:14:30.038 }' 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.038 10:25:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.297 [2024-11-19 10:25:44.041376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:30.297 [2024-11-19 10:25:44.041415] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:30.297 [2024-11-19 10:25:44.049424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:30.297 [2024-11-19 10:25:44.051128] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:30.297 [2024-11-19 10:25:44.051210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:30.297 [2024-11-19 10:25:44.051224] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:30.297 [2024-11-19 10:25:44.051233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.297 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:30.298 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.298 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.298 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.298 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.298 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.298 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.558 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.558 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.558 "name": "Existed_Raid", 00:14:30.558 "uuid": "f79ba45f-f729-4712-b13f-b241f51f36d6", 00:14:30.558 "strip_size_kb": 64, 00:14:30.558 "state": "configuring", 00:14:30.558 "raid_level": "raid5f", 00:14:30.558 "superblock": true, 00:14:30.558 "num_base_bdevs": 3, 00:14:30.558 "num_base_bdevs_discovered": 1, 00:14:30.558 "num_base_bdevs_operational": 3, 00:14:30.558 "base_bdevs_list": [ 00:14:30.558 { 00:14:30.558 "name": "BaseBdev1", 00:14:30.558 "uuid": "a1c8d56a-2cd4-4550-af35-901560436fe7", 00:14:30.558 "is_configured": true, 00:14:30.558 "data_offset": 2048, 00:14:30.558 "data_size": 63488 00:14:30.558 }, 00:14:30.558 { 00:14:30.558 "name": "BaseBdev2", 00:14:30.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.558 "is_configured": false, 00:14:30.558 "data_offset": 0, 00:14:30.558 "data_size": 0 00:14:30.558 }, 00:14:30.558 { 00:14:30.558 "name": "BaseBdev3", 00:14:30.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.558 "is_configured": false, 00:14:30.558 "data_offset": 0, 00:14:30.558 "data_size": 
0 00:14:30.558 } 00:14:30.558 ] 00:14:30.558 }' 00:14:30.558 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.558 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 [2024-11-19 10:25:44.513315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.818 BaseBdev2 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 [ 00:14:30.818 { 00:14:30.818 "name": "BaseBdev2", 00:14:30.818 "aliases": [ 00:14:30.818 "826c157b-285b-493f-80df-b0af07b75c6d" 00:14:30.818 ], 00:14:30.818 "product_name": "Malloc disk", 00:14:30.818 "block_size": 512, 00:14:30.818 "num_blocks": 65536, 00:14:30.818 "uuid": "826c157b-285b-493f-80df-b0af07b75c6d", 00:14:30.818 "assigned_rate_limits": { 00:14:30.818 "rw_ios_per_sec": 0, 00:14:30.818 "rw_mbytes_per_sec": 0, 00:14:30.818 "r_mbytes_per_sec": 0, 00:14:30.818 "w_mbytes_per_sec": 0 00:14:30.818 }, 00:14:30.818 "claimed": true, 00:14:30.818 "claim_type": "exclusive_write", 00:14:30.818 "zoned": false, 00:14:30.818 "supported_io_types": { 00:14:30.818 "read": true, 00:14:30.818 "write": true, 00:14:30.818 "unmap": true, 00:14:30.818 "flush": true, 00:14:30.818 "reset": true, 00:14:30.818 "nvme_admin": false, 00:14:30.818 "nvme_io": false, 00:14:30.818 "nvme_io_md": false, 00:14:30.818 "write_zeroes": true, 00:14:30.818 "zcopy": true, 00:14:30.818 "get_zone_info": false, 00:14:30.818 "zone_management": false, 00:14:30.818 "zone_append": false, 00:14:30.818 "compare": false, 00:14:30.818 "compare_and_write": false, 00:14:30.818 "abort": true, 00:14:30.818 "seek_hole": false, 00:14:30.818 "seek_data": false, 00:14:30.818 "copy": true, 00:14:30.818 "nvme_iov_md": false 00:14:30.818 }, 00:14:30.818 "memory_domains": [ 00:14:30.818 { 00:14:30.818 "dma_device_id": "system", 00:14:30.818 "dma_device_type": 1 00:14:30.818 }, 00:14:30.818 { 00:14:30.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.818 "dma_device_type": 2 00:14:30.818 } 
00:14:30.818 ], 00:14:30.818 "driver_specific": {} 00:14:30.818 } 00:14:30.818 ] 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:30.818 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.079 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.079 "name": "Existed_Raid", 00:14:31.079 "uuid": "f79ba45f-f729-4712-b13f-b241f51f36d6", 00:14:31.079 "strip_size_kb": 64, 00:14:31.079 "state": "configuring", 00:14:31.079 "raid_level": "raid5f", 00:14:31.079 "superblock": true, 00:14:31.079 "num_base_bdevs": 3, 00:14:31.079 "num_base_bdevs_discovered": 2, 00:14:31.079 "num_base_bdevs_operational": 3, 00:14:31.079 "base_bdevs_list": [ 00:14:31.079 { 00:14:31.079 "name": "BaseBdev1", 00:14:31.079 "uuid": "a1c8d56a-2cd4-4550-af35-901560436fe7", 00:14:31.079 "is_configured": true, 00:14:31.079 "data_offset": 2048, 00:14:31.079 "data_size": 63488 00:14:31.079 }, 00:14:31.079 { 00:14:31.079 "name": "BaseBdev2", 00:14:31.079 "uuid": "826c157b-285b-493f-80df-b0af07b75c6d", 00:14:31.079 "is_configured": true, 00:14:31.079 "data_offset": 2048, 00:14:31.079 "data_size": 63488 00:14:31.079 }, 00:14:31.079 { 00:14:31.079 "name": "BaseBdev3", 00:14:31.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:31.079 "is_configured": false, 00:14:31.079 "data_offset": 0, 00:14:31.079 "data_size": 0 00:14:31.079 } 00:14:31.079 ] 00:14:31.079 }' 00:14:31.079 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.079 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.339 10:25:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:31.339 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:31.339 10:25:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.339 [2024-11-19 10:25:45.049434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:31.339 [2024-11-19 10:25:45.049694] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:31.339 [2024-11-19 10:25:45.049718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.339 [2024-11-19 10:25:45.049973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:31.339 BaseBdev3 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.339 [2024-11-19 10:25:45.055517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:31.339 [2024-11-19 10:25:45.055593] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:31.339 [2024-11-19 10:25:45.055796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.339 [ 00:14:31.339 { 00:14:31.339 "name": "BaseBdev3", 00:14:31.339 "aliases": [ 00:14:31.339 "3263dfea-2f8b-4b10-918a-26ebf2a16da0" 00:14:31.339 ], 00:14:31.339 "product_name": "Malloc disk", 00:14:31.339 "block_size": 512, 00:14:31.339 "num_blocks": 65536, 00:14:31.339 "uuid": "3263dfea-2f8b-4b10-918a-26ebf2a16da0", 00:14:31.339 "assigned_rate_limits": { 00:14:31.339 "rw_ios_per_sec": 0, 00:14:31.339 "rw_mbytes_per_sec": 0, 00:14:31.339 "r_mbytes_per_sec": 0, 00:14:31.339 "w_mbytes_per_sec": 0 00:14:31.339 }, 00:14:31.339 "claimed": true, 00:14:31.339 "claim_type": "exclusive_write", 00:14:31.339 "zoned": false, 00:14:31.339 "supported_io_types": { 00:14:31.339 "read": true, 00:14:31.339 "write": true, 00:14:31.339 "unmap": true, 00:14:31.339 "flush": true, 00:14:31.339 "reset": true, 00:14:31.339 "nvme_admin": false, 00:14:31.339 "nvme_io": false, 00:14:31.339 "nvme_io_md": false, 00:14:31.339 "write_zeroes": true, 00:14:31.339 "zcopy": true, 00:14:31.339 "get_zone_info": false, 00:14:31.339 "zone_management": false, 00:14:31.339 "zone_append": false, 00:14:31.339 "compare": false, 00:14:31.339 "compare_and_write": false, 00:14:31.339 "abort": true, 00:14:31.339 "seek_hole": false, 00:14:31.339 "seek_data": false, 00:14:31.339 "copy": true, 00:14:31.339 
"nvme_iov_md": false 00:14:31.339 }, 00:14:31.339 "memory_domains": [ 00:14:31.339 { 00:14:31.339 "dma_device_id": "system", 00:14:31.339 "dma_device_type": 1 00:14:31.339 }, 00:14:31.339 { 00:14:31.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.339 "dma_device_type": 2 00:14:31.339 } 00:14:31.339 ], 00:14:31.339 "driver_specific": {} 00:14:31.339 } 00:14:31.339 ] 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.339 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.340 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.598 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.598 "name": "Existed_Raid", 00:14:31.598 "uuid": "f79ba45f-f729-4712-b13f-b241f51f36d6", 00:14:31.598 "strip_size_kb": 64, 00:14:31.598 "state": "online", 00:14:31.598 "raid_level": "raid5f", 00:14:31.598 "superblock": true, 00:14:31.598 "num_base_bdevs": 3, 00:14:31.598 "num_base_bdevs_discovered": 3, 00:14:31.598 "num_base_bdevs_operational": 3, 00:14:31.598 "base_bdevs_list": [ 00:14:31.598 { 00:14:31.598 "name": "BaseBdev1", 00:14:31.598 "uuid": "a1c8d56a-2cd4-4550-af35-901560436fe7", 00:14:31.598 "is_configured": true, 00:14:31.598 "data_offset": 2048, 00:14:31.598 "data_size": 63488 00:14:31.598 }, 00:14:31.598 { 00:14:31.598 "name": "BaseBdev2", 00:14:31.598 "uuid": "826c157b-285b-493f-80df-b0af07b75c6d", 00:14:31.598 "is_configured": true, 00:14:31.598 "data_offset": 2048, 00:14:31.598 "data_size": 63488 00:14:31.598 }, 00:14:31.598 { 00:14:31.598 "name": "BaseBdev3", 00:14:31.598 "uuid": "3263dfea-2f8b-4b10-918a-26ebf2a16da0", 00:14:31.598 "is_configured": true, 00:14:31.598 "data_offset": 2048, 00:14:31.598 "data_size": 63488 00:14:31.598 } 00:14:31.598 ] 00:14:31.598 }' 00:14:31.598 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.598 10:25:45 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:31.858 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.859 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:31.859 [2024-11-19 10:25:45.537220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.859 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.859 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.859 "name": "Existed_Raid", 00:14:31.859 "aliases": [ 00:14:31.859 "f79ba45f-f729-4712-b13f-b241f51f36d6" 00:14:31.859 ], 00:14:31.859 "product_name": "Raid Volume", 00:14:31.859 "block_size": 512, 00:14:31.859 "num_blocks": 126976, 00:14:31.859 "uuid": "f79ba45f-f729-4712-b13f-b241f51f36d6", 00:14:31.859 "assigned_rate_limits": { 00:14:31.859 "rw_ios_per_sec": 0, 00:14:31.859 
"rw_mbytes_per_sec": 0, 00:14:31.859 "r_mbytes_per_sec": 0, 00:14:31.859 "w_mbytes_per_sec": 0 00:14:31.859 }, 00:14:31.859 "claimed": false, 00:14:31.859 "zoned": false, 00:14:31.859 "supported_io_types": { 00:14:31.859 "read": true, 00:14:31.859 "write": true, 00:14:31.859 "unmap": false, 00:14:31.859 "flush": false, 00:14:31.859 "reset": true, 00:14:31.859 "nvme_admin": false, 00:14:31.859 "nvme_io": false, 00:14:31.859 "nvme_io_md": false, 00:14:31.859 "write_zeroes": true, 00:14:31.859 "zcopy": false, 00:14:31.859 "get_zone_info": false, 00:14:31.859 "zone_management": false, 00:14:31.859 "zone_append": false, 00:14:31.859 "compare": false, 00:14:31.859 "compare_and_write": false, 00:14:31.859 "abort": false, 00:14:31.859 "seek_hole": false, 00:14:31.859 "seek_data": false, 00:14:31.859 "copy": false, 00:14:31.859 "nvme_iov_md": false 00:14:31.859 }, 00:14:31.859 "driver_specific": { 00:14:31.859 "raid": { 00:14:31.859 "uuid": "f79ba45f-f729-4712-b13f-b241f51f36d6", 00:14:31.859 "strip_size_kb": 64, 00:14:31.859 "state": "online", 00:14:31.859 "raid_level": "raid5f", 00:14:31.859 "superblock": true, 00:14:31.859 "num_base_bdevs": 3, 00:14:31.859 "num_base_bdevs_discovered": 3, 00:14:31.859 "num_base_bdevs_operational": 3, 00:14:31.859 "base_bdevs_list": [ 00:14:31.859 { 00:14:31.859 "name": "BaseBdev1", 00:14:31.859 "uuid": "a1c8d56a-2cd4-4550-af35-901560436fe7", 00:14:31.859 "is_configured": true, 00:14:31.859 "data_offset": 2048, 00:14:31.859 "data_size": 63488 00:14:31.859 }, 00:14:31.859 { 00:14:31.859 "name": "BaseBdev2", 00:14:31.859 "uuid": "826c157b-285b-493f-80df-b0af07b75c6d", 00:14:31.859 "is_configured": true, 00:14:31.859 "data_offset": 2048, 00:14:31.859 "data_size": 63488 00:14:31.859 }, 00:14:31.859 { 00:14:31.859 "name": "BaseBdev3", 00:14:31.859 "uuid": "3263dfea-2f8b-4b10-918a-26ebf2a16da0", 00:14:31.859 "is_configured": true, 00:14:31.859 "data_offset": 2048, 00:14:31.859 "data_size": 63488 00:14:31.859 } 00:14:31.859 ] 00:14:31.859 } 
00:14:31.859 } 00:14:31.859 }' 00:14:31.859 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.859 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:31.859 BaseBdev2 00:14:31.859 BaseBdev3' 00:14:31.859 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.119 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.119 [2024-11-19 
10:25:45.816580] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.379 10:25:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.379 "name": "Existed_Raid", 00:14:32.379 "uuid": "f79ba45f-f729-4712-b13f-b241f51f36d6", 00:14:32.379 "strip_size_kb": 64, 00:14:32.379 "state": "online", 00:14:32.379 "raid_level": "raid5f", 00:14:32.379 "superblock": true, 00:14:32.379 "num_base_bdevs": 3, 00:14:32.379 "num_base_bdevs_discovered": 2, 00:14:32.379 "num_base_bdevs_operational": 2, 00:14:32.379 "base_bdevs_list": [ 00:14:32.379 { 00:14:32.379 "name": null, 00:14:32.379 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.379 "is_configured": false, 00:14:32.379 "data_offset": 0, 00:14:32.379 "data_size": 63488 00:14:32.379 }, 00:14:32.379 { 00:14:32.379 "name": "BaseBdev2", 00:14:32.379 "uuid": "826c157b-285b-493f-80df-b0af07b75c6d", 00:14:32.379 "is_configured": true, 00:14:32.379 "data_offset": 2048, 00:14:32.379 "data_size": 63488 00:14:32.379 }, 00:14:32.379 { 00:14:32.379 "name": "BaseBdev3", 00:14:32.379 "uuid": "3263dfea-2f8b-4b10-918a-26ebf2a16da0", 00:14:32.379 "is_configured": true, 00:14:32.379 "data_offset": 2048, 00:14:32.379 "data_size": 63488 00:14:32.379 } 00:14:32.379 ] 00:14:32.379 }' 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.379 10:25:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.639 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.639 [2024-11-19 10:25:46.384285] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:32.639 [2024-11-19 10:25:46.384507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.898 [2024-11-19 10:25:46.474244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.898 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.898 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:32.898 10:25:46 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.898 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.898 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.898 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.899 [2024-11-19 10:25:46.534173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:32.899 [2024-11-19 10:25:46.534216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.899 
10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.899 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.160 BaseBdev2 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.160 10:25:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.160 [ 00:14:33.160 { 00:14:33.160 "name": "BaseBdev2", 00:14:33.160 "aliases": [ 00:14:33.160 "246002b2-72be-4009-be24-d023c2d539a4" 00:14:33.160 ], 00:14:33.160 "product_name": "Malloc disk", 00:14:33.160 "block_size": 512, 00:14:33.160 "num_blocks": 65536, 00:14:33.160 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:33.160 "assigned_rate_limits": { 00:14:33.160 "rw_ios_per_sec": 0, 00:14:33.160 "rw_mbytes_per_sec": 0, 00:14:33.160 "r_mbytes_per_sec": 0, 00:14:33.160 "w_mbytes_per_sec": 0 00:14:33.160 }, 00:14:33.160 "claimed": false, 00:14:33.160 "zoned": false, 00:14:33.160 "supported_io_types": { 00:14:33.160 "read": true, 00:14:33.160 "write": true, 00:14:33.160 "unmap": true, 00:14:33.160 "flush": true, 00:14:33.160 "reset": true, 00:14:33.160 "nvme_admin": false, 00:14:33.160 "nvme_io": false, 00:14:33.160 "nvme_io_md": false, 00:14:33.160 "write_zeroes": true, 00:14:33.160 "zcopy": true, 00:14:33.160 "get_zone_info": false, 
00:14:33.160 "zone_management": false, 00:14:33.160 "zone_append": false, 00:14:33.160 "compare": false, 00:14:33.160 "compare_and_write": false, 00:14:33.160 "abort": true, 00:14:33.160 "seek_hole": false, 00:14:33.160 "seek_data": false, 00:14:33.160 "copy": true, 00:14:33.160 "nvme_iov_md": false 00:14:33.160 }, 00:14:33.160 "memory_domains": [ 00:14:33.160 { 00:14:33.160 "dma_device_id": "system", 00:14:33.160 "dma_device_type": 1 00:14:33.160 }, 00:14:33.160 { 00:14:33.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.160 "dma_device_type": 2 00:14:33.160 } 00:14:33.160 ], 00:14:33.160 "driver_specific": {} 00:14:33.160 } 00:14:33.160 ] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.160 BaseBdev3 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.160 10:25:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.160 [ 00:14:33.160 { 00:14:33.160 "name": "BaseBdev3", 00:14:33.160 "aliases": [ 00:14:33.160 "b3d271cf-9e04-445a-8688-b98c57fc598c" 00:14:33.160 ], 00:14:33.160 "product_name": "Malloc disk", 00:14:33.160 "block_size": 512, 00:14:33.160 "num_blocks": 65536, 00:14:33.160 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:33.160 "assigned_rate_limits": { 00:14:33.160 "rw_ios_per_sec": 0, 00:14:33.160 "rw_mbytes_per_sec": 0, 00:14:33.160 "r_mbytes_per_sec": 0, 00:14:33.160 "w_mbytes_per_sec": 0 00:14:33.160 }, 00:14:33.160 "claimed": false, 00:14:33.160 "zoned": false, 00:14:33.160 "supported_io_types": { 00:14:33.160 "read": true, 00:14:33.160 "write": true, 00:14:33.160 "unmap": true, 00:14:33.160 "flush": true, 00:14:33.160 "reset": true, 00:14:33.160 "nvme_admin": false, 00:14:33.160 "nvme_io": false, 00:14:33.160 "nvme_io_md": 
false, 00:14:33.160 "write_zeroes": true, 00:14:33.160 "zcopy": true, 00:14:33.160 "get_zone_info": false, 00:14:33.160 "zone_management": false, 00:14:33.160 "zone_append": false, 00:14:33.160 "compare": false, 00:14:33.160 "compare_and_write": false, 00:14:33.160 "abort": true, 00:14:33.160 "seek_hole": false, 00:14:33.160 "seek_data": false, 00:14:33.160 "copy": true, 00:14:33.160 "nvme_iov_md": false 00:14:33.160 }, 00:14:33.160 "memory_domains": [ 00:14:33.160 { 00:14:33.160 "dma_device_id": "system", 00:14:33.160 "dma_device_type": 1 00:14:33.160 }, 00:14:33.160 { 00:14:33.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.160 "dma_device_type": 2 00:14:33.160 } 00:14:33.160 ], 00:14:33.160 "driver_specific": {} 00:14:33.160 } 00:14:33.160 ] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:33.160 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.161 [2024-11-19 10:25:46.848436] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.161 [2024-11-19 10:25:46.848519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.161 [2024-11-19 10:25:46.848574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:33.161 [2024-11-19 10:25:46.850252] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.161 10:25:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.161 "name": "Existed_Raid", 00:14:33.161 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:33.161 "strip_size_kb": 64, 00:14:33.161 "state": "configuring", 00:14:33.161 "raid_level": "raid5f", 00:14:33.161 "superblock": true, 00:14:33.161 "num_base_bdevs": 3, 00:14:33.161 "num_base_bdevs_discovered": 2, 00:14:33.161 "num_base_bdevs_operational": 3, 00:14:33.161 "base_bdevs_list": [ 00:14:33.161 { 00:14:33.161 "name": "BaseBdev1", 00:14:33.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.161 "is_configured": false, 00:14:33.161 "data_offset": 0, 00:14:33.161 "data_size": 0 00:14:33.161 }, 00:14:33.161 { 00:14:33.161 "name": "BaseBdev2", 00:14:33.161 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:33.161 "is_configured": true, 00:14:33.161 "data_offset": 2048, 00:14:33.161 "data_size": 63488 00:14:33.161 }, 00:14:33.161 { 00:14:33.161 "name": "BaseBdev3", 00:14:33.161 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:33.161 "is_configured": true, 00:14:33.161 "data_offset": 2048, 00:14:33.161 "data_size": 63488 00:14:33.161 } 00:14:33.161 ] 00:14:33.161 }' 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.161 10:25:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.730 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.731 [2024-11-19 10:25:47.283648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:33.731 
10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:33.731 "name": "Existed_Raid", 00:14:33.731 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:33.731 "strip_size_kb": 64, 00:14:33.731 "state": "configuring", 00:14:33.731 "raid_level": "raid5f", 00:14:33.731 "superblock": true, 00:14:33.731 "num_base_bdevs": 3, 00:14:33.731 "num_base_bdevs_discovered": 1, 00:14:33.731 "num_base_bdevs_operational": 3, 00:14:33.731 "base_bdevs_list": [ 00:14:33.731 { 00:14:33.731 "name": "BaseBdev1", 00:14:33.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.731 "is_configured": false, 00:14:33.731 "data_offset": 0, 00:14:33.731 "data_size": 0 00:14:33.731 }, 00:14:33.731 { 00:14:33.731 "name": null, 00:14:33.731 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:33.731 "is_configured": false, 00:14:33.731 "data_offset": 0, 00:14:33.731 "data_size": 63488 00:14:33.731 }, 00:14:33.731 { 00:14:33.731 "name": "BaseBdev3", 00:14:33.731 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:33.731 "is_configured": true, 00:14:33.731 "data_offset": 2048, 00:14:33.731 "data_size": 63488 00:14:33.731 } 00:14:33.731 ] 00:14:33.731 }' 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.731 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 [2024-11-19 10:25:47.743764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:33.990 BaseBdev1 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:33.990 
10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.990 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.990 [ 00:14:33.990 { 00:14:33.990 "name": "BaseBdev1", 00:14:33.990 "aliases": [ 00:14:34.250 "10ba114f-230f-422e-838a-0226eab201e2" 00:14:34.250 ], 00:14:34.250 "product_name": "Malloc disk", 00:14:34.250 "block_size": 512, 00:14:34.250 "num_blocks": 65536, 00:14:34.250 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:34.250 "assigned_rate_limits": { 00:14:34.250 "rw_ios_per_sec": 0, 00:14:34.250 "rw_mbytes_per_sec": 0, 00:14:34.250 "r_mbytes_per_sec": 0, 00:14:34.250 "w_mbytes_per_sec": 0 00:14:34.250 }, 00:14:34.250 "claimed": true, 00:14:34.250 "claim_type": "exclusive_write", 00:14:34.250 "zoned": false, 00:14:34.250 "supported_io_types": { 00:14:34.250 "read": true, 00:14:34.250 "write": true, 00:14:34.250 "unmap": true, 00:14:34.250 "flush": true, 00:14:34.250 "reset": true, 00:14:34.250 "nvme_admin": false, 00:14:34.250 "nvme_io": false, 00:14:34.250 "nvme_io_md": false, 00:14:34.250 "write_zeroes": true, 00:14:34.250 "zcopy": true, 00:14:34.250 "get_zone_info": false, 00:14:34.250 "zone_management": false, 00:14:34.250 "zone_append": false, 00:14:34.250 "compare": false, 00:14:34.250 "compare_and_write": false, 00:14:34.250 "abort": true, 00:14:34.250 "seek_hole": false, 00:14:34.250 "seek_data": false, 00:14:34.250 "copy": true, 00:14:34.250 "nvme_iov_md": false 00:14:34.250 }, 00:14:34.250 "memory_domains": [ 00:14:34.250 { 00:14:34.250 "dma_device_id": "system", 00:14:34.250 "dma_device_type": 1 00:14:34.250 }, 00:14:34.250 { 00:14:34.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:34.250 "dma_device_type": 2 00:14:34.250 } 00:14:34.250 ], 00:14:34.250 "driver_specific": {} 00:14:34.250 } 00:14:34.250 ] 00:14:34.250 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.251 
10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:34.251 "name": "Existed_Raid", 00:14:34.251 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:34.251 "strip_size_kb": 64, 00:14:34.251 "state": "configuring", 00:14:34.251 "raid_level": "raid5f", 00:14:34.251 "superblock": true, 00:14:34.251 "num_base_bdevs": 3, 00:14:34.251 "num_base_bdevs_discovered": 2, 00:14:34.251 "num_base_bdevs_operational": 3, 00:14:34.251 "base_bdevs_list": [ 00:14:34.251 { 00:14:34.251 "name": "BaseBdev1", 00:14:34.251 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:34.251 "is_configured": true, 00:14:34.251 "data_offset": 2048, 00:14:34.251 "data_size": 63488 00:14:34.251 }, 00:14:34.251 { 00:14:34.251 "name": null, 00:14:34.251 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:34.251 "is_configured": false, 00:14:34.251 "data_offset": 0, 00:14:34.251 "data_size": 63488 00:14:34.251 }, 00:14:34.251 { 00:14:34.251 "name": "BaseBdev3", 00:14:34.251 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:34.251 "is_configured": true, 00:14:34.251 "data_offset": 2048, 00:14:34.251 "data_size": 63488 00:14:34.251 } 00:14:34.251 ] 00:14:34.251 }' 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.251 10:25:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.510 [2024-11-19 10:25:48.254961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.510 10:25:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:34.510 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.768 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.768 "name": "Existed_Raid", 00:14:34.768 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:34.768 "strip_size_kb": 64, 00:14:34.768 "state": "configuring", 00:14:34.768 "raid_level": "raid5f", 00:14:34.768 "superblock": true, 00:14:34.768 "num_base_bdevs": 3, 00:14:34.768 "num_base_bdevs_discovered": 1, 00:14:34.768 "num_base_bdevs_operational": 3, 00:14:34.768 "base_bdevs_list": [ 00:14:34.768 { 00:14:34.768 "name": "BaseBdev1", 00:14:34.768 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:34.768 "is_configured": true, 00:14:34.768 "data_offset": 2048, 00:14:34.768 "data_size": 63488 00:14:34.768 }, 00:14:34.768 { 00:14:34.768 "name": null, 00:14:34.768 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:34.768 "is_configured": false, 00:14:34.768 "data_offset": 0, 00:14:34.768 "data_size": 63488 00:14:34.768 }, 00:14:34.768 { 00:14:34.768 "name": null, 00:14:34.768 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:34.768 "is_configured": false, 00:14:34.768 "data_offset": 0, 00:14:34.768 "data_size": 63488 00:14:34.768 } 00:14:34.768 ] 00:14:34.768 }' 00:14:34.768 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.768 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.027 [2024-11-19 10:25:48.726158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.027 10:25:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.027 "name": "Existed_Raid", 00:14:35.027 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:35.027 "strip_size_kb": 64, 00:14:35.027 "state": "configuring", 00:14:35.027 "raid_level": "raid5f", 00:14:35.027 "superblock": true, 00:14:35.027 "num_base_bdevs": 3, 00:14:35.027 "num_base_bdevs_discovered": 2, 00:14:35.027 "num_base_bdevs_operational": 3, 00:14:35.027 "base_bdevs_list": [ 00:14:35.027 { 00:14:35.027 "name": "BaseBdev1", 00:14:35.027 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:35.027 "is_configured": true, 00:14:35.027 "data_offset": 2048, 00:14:35.027 "data_size": 63488 00:14:35.027 }, 00:14:35.027 { 00:14:35.027 "name": null, 00:14:35.027 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:35.027 "is_configured": false, 00:14:35.027 "data_offset": 0, 00:14:35.027 "data_size": 63488 00:14:35.027 }, 00:14:35.027 { 
00:14:35.027 "name": "BaseBdev3", 00:14:35.027 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:35.027 "is_configured": true, 00:14:35.027 "data_offset": 2048, 00:14:35.027 "data_size": 63488 00:14:35.027 } 00:14:35.027 ] 00:14:35.027 }' 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.027 10:25:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.597 [2024-11-19 10:25:49.205351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.597 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.597 "name": "Existed_Raid", 00:14:35.597 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:35.597 "strip_size_kb": 64, 00:14:35.597 "state": "configuring", 00:14:35.597 "raid_level": "raid5f", 00:14:35.597 "superblock": true, 00:14:35.597 "num_base_bdevs": 3, 00:14:35.597 "num_base_bdevs_discovered": 1, 00:14:35.597 
"num_base_bdevs_operational": 3, 00:14:35.597 "base_bdevs_list": [ 00:14:35.597 { 00:14:35.597 "name": null, 00:14:35.597 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:35.597 "is_configured": false, 00:14:35.597 "data_offset": 0, 00:14:35.597 "data_size": 63488 00:14:35.597 }, 00:14:35.597 { 00:14:35.597 "name": null, 00:14:35.597 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:35.597 "is_configured": false, 00:14:35.597 "data_offset": 0, 00:14:35.597 "data_size": 63488 00:14:35.597 }, 00:14:35.597 { 00:14:35.597 "name": "BaseBdev3", 00:14:35.598 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:35.598 "is_configured": true, 00:14:35.598 "data_offset": 2048, 00:14:35.598 "data_size": 63488 00:14:35.598 } 00:14:35.598 ] 00:14:35.598 }' 00:14:35.598 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.598 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.166 10:25:49 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.166 [2024-11-19 10:25:49.756158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.166 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.167 "name": "Existed_Raid", 00:14:36.167 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:36.167 "strip_size_kb": 64, 00:14:36.167 "state": "configuring", 00:14:36.167 "raid_level": "raid5f", 00:14:36.167 "superblock": true, 00:14:36.167 "num_base_bdevs": 3, 00:14:36.167 "num_base_bdevs_discovered": 2, 00:14:36.167 "num_base_bdevs_operational": 3, 00:14:36.167 "base_bdevs_list": [ 00:14:36.167 { 00:14:36.167 "name": null, 00:14:36.167 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:36.167 "is_configured": false, 00:14:36.167 "data_offset": 0, 00:14:36.167 "data_size": 63488 00:14:36.167 }, 00:14:36.167 { 00:14:36.167 "name": "BaseBdev2", 00:14:36.167 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:36.167 "is_configured": true, 00:14:36.167 "data_offset": 2048, 00:14:36.167 "data_size": 63488 00:14:36.167 }, 00:14:36.167 { 00:14:36.167 "name": "BaseBdev3", 00:14:36.167 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:36.167 "is_configured": true, 00:14:36.167 "data_offset": 2048, 00:14:36.167 "data_size": 63488 00:14:36.167 } 00:14:36.167 ] 00:14:36.167 }' 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.167 10:25:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.426 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.426 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.426 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.426 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:14:36.426 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 10ba114f-230f-422e-838a-0226eab201e2 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.685 [2024-11-19 10:25:50.302803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:36.685 [2024-11-19 10:25:50.303111] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:36.685 [2024-11-19 10:25:50.303165] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:36.685 [2024-11-19 10:25:50.303432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:36.685 NewBaseBdev 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.685 [2024-11-19 10:25:50.308673] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:36.685 [2024-11-19 10:25:50.308689] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:36.685 [2024-11-19 10:25:50.308832] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.685 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.685 [ 00:14:36.685 { 00:14:36.685 "name": "NewBaseBdev", 00:14:36.685 "aliases": [ 00:14:36.685 "10ba114f-230f-422e-838a-0226eab201e2" 00:14:36.685 
], 00:14:36.685 "product_name": "Malloc disk", 00:14:36.685 "block_size": 512, 00:14:36.685 "num_blocks": 65536, 00:14:36.685 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:36.685 "assigned_rate_limits": { 00:14:36.685 "rw_ios_per_sec": 0, 00:14:36.685 "rw_mbytes_per_sec": 0, 00:14:36.685 "r_mbytes_per_sec": 0, 00:14:36.685 "w_mbytes_per_sec": 0 00:14:36.685 }, 00:14:36.685 "claimed": true, 00:14:36.685 "claim_type": "exclusive_write", 00:14:36.685 "zoned": false, 00:14:36.685 "supported_io_types": { 00:14:36.685 "read": true, 00:14:36.685 "write": true, 00:14:36.685 "unmap": true, 00:14:36.685 "flush": true, 00:14:36.685 "reset": true, 00:14:36.685 "nvme_admin": false, 00:14:36.685 "nvme_io": false, 00:14:36.685 "nvme_io_md": false, 00:14:36.685 "write_zeroes": true, 00:14:36.685 "zcopy": true, 00:14:36.685 "get_zone_info": false, 00:14:36.685 "zone_management": false, 00:14:36.685 "zone_append": false, 00:14:36.686 "compare": false, 00:14:36.686 "compare_and_write": false, 00:14:36.686 "abort": true, 00:14:36.686 "seek_hole": false, 00:14:36.686 "seek_data": false, 00:14:36.686 "copy": true, 00:14:36.686 "nvme_iov_md": false 00:14:36.686 }, 00:14:36.686 "memory_domains": [ 00:14:36.686 { 00:14:36.686 "dma_device_id": "system", 00:14:36.686 "dma_device_type": 1 00:14:36.686 }, 00:14:36.686 { 00:14:36.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:36.686 "dma_device_type": 2 00:14:36.686 } 00:14:36.686 ], 00:14:36.686 "driver_specific": {} 00:14:36.686 } 00:14:36.686 ] 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:36.686 "name": "Existed_Raid", 00:14:36.686 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:36.686 "strip_size_kb": 64, 00:14:36.686 "state": "online", 00:14:36.686 "raid_level": "raid5f", 00:14:36.686 "superblock": true, 00:14:36.686 "num_base_bdevs": 3, 00:14:36.686 "num_base_bdevs_discovered": 3, 00:14:36.686 
"num_base_bdevs_operational": 3, 00:14:36.686 "base_bdevs_list": [ 00:14:36.686 { 00:14:36.686 "name": "NewBaseBdev", 00:14:36.686 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:36.686 "is_configured": true, 00:14:36.686 "data_offset": 2048, 00:14:36.686 "data_size": 63488 00:14:36.686 }, 00:14:36.686 { 00:14:36.686 "name": "BaseBdev2", 00:14:36.686 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:36.686 "is_configured": true, 00:14:36.686 "data_offset": 2048, 00:14:36.686 "data_size": 63488 00:14:36.686 }, 00:14:36.686 { 00:14:36.686 "name": "BaseBdev3", 00:14:36.686 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:36.686 "is_configured": true, 00:14:36.686 "data_offset": 2048, 00:14:36.686 "data_size": 63488 00:14:36.686 } 00:14:36.686 ] 00:14:36.686 }' 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:36.686 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.254 10:25:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:37.254 [2024-11-19 10:25:50.790182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.254 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:37.254 "name": "Existed_Raid", 00:14:37.254 "aliases": [ 00:14:37.254 "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6" 00:14:37.254 ], 00:14:37.254 "product_name": "Raid Volume", 00:14:37.254 "block_size": 512, 00:14:37.254 "num_blocks": 126976, 00:14:37.254 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:37.254 "assigned_rate_limits": { 00:14:37.254 "rw_ios_per_sec": 0, 00:14:37.254 "rw_mbytes_per_sec": 0, 00:14:37.255 "r_mbytes_per_sec": 0, 00:14:37.255 "w_mbytes_per_sec": 0 00:14:37.255 }, 00:14:37.255 "claimed": false, 00:14:37.255 "zoned": false, 00:14:37.255 "supported_io_types": { 00:14:37.255 "read": true, 00:14:37.255 "write": true, 00:14:37.255 "unmap": false, 00:14:37.255 "flush": false, 00:14:37.255 "reset": true, 00:14:37.255 "nvme_admin": false, 00:14:37.255 "nvme_io": false, 00:14:37.255 "nvme_io_md": false, 00:14:37.255 "write_zeroes": true, 00:14:37.255 "zcopy": false, 00:14:37.255 "get_zone_info": false, 00:14:37.255 "zone_management": false, 00:14:37.255 "zone_append": false, 00:14:37.255 "compare": false, 00:14:37.255 "compare_and_write": false, 00:14:37.255 "abort": false, 00:14:37.255 "seek_hole": false, 00:14:37.255 "seek_data": false, 00:14:37.255 "copy": false, 00:14:37.255 "nvme_iov_md": false 00:14:37.255 }, 00:14:37.255 "driver_specific": { 00:14:37.255 "raid": { 00:14:37.255 "uuid": "8201d8dd-567f-45a5-b2ca-5ef1e96fd9a6", 00:14:37.255 "strip_size_kb": 64, 00:14:37.255 "state": "online", 00:14:37.255 "raid_level": 
"raid5f", 00:14:37.255 "superblock": true, 00:14:37.255 "num_base_bdevs": 3, 00:14:37.255 "num_base_bdevs_discovered": 3, 00:14:37.255 "num_base_bdevs_operational": 3, 00:14:37.255 "base_bdevs_list": [ 00:14:37.255 { 00:14:37.255 "name": "NewBaseBdev", 00:14:37.255 "uuid": "10ba114f-230f-422e-838a-0226eab201e2", 00:14:37.255 "is_configured": true, 00:14:37.255 "data_offset": 2048, 00:14:37.255 "data_size": 63488 00:14:37.255 }, 00:14:37.255 { 00:14:37.255 "name": "BaseBdev2", 00:14:37.255 "uuid": "246002b2-72be-4009-be24-d023c2d539a4", 00:14:37.255 "is_configured": true, 00:14:37.255 "data_offset": 2048, 00:14:37.255 "data_size": 63488 00:14:37.255 }, 00:14:37.255 { 00:14:37.255 "name": "BaseBdev3", 00:14:37.255 "uuid": "b3d271cf-9e04-445a-8688-b98c57fc598c", 00:14:37.255 "is_configured": true, 00:14:37.255 "data_offset": 2048, 00:14:37.255 "data_size": 63488 00:14:37.255 } 00:14:37.255 ] 00:14:37.255 } 00:14:37.255 } 00:14:37.255 }' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:37.255 BaseBdev2 00:14:37.255 BaseBdev3' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:37.255 10:25:50 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.255 10:25:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.255 10:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:37.255 10:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:37.255 10:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:37.255 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.255 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.255 [2024-11-19 10:25:51.029583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:37.255 [2024-11-19 10:25:51.029649] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:37.255 [2024-11-19 10:25:51.029731] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:37.255 [2024-11-19 10:25:51.030030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:37.255 [2024-11-19 10:25:51.030086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80210 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80210 ']' 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80210 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80210 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:37.514 killing process with pid 80210 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80210' 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80210 00:14:37.514 [2024-11-19 10:25:51.067046] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:37.514 10:25:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80210 00:14:37.773 [2024-11-19 10:25:51.346643] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:38.711 10:25:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:38.711 00:14:38.711 real 0m10.242s 00:14:38.711 user 0m16.296s 00:14:38.711 sys 0m1.844s 00:14:38.711 10:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:38.711 10:25:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.711 ************************************ 00:14:38.711 END TEST raid5f_state_function_test_sb 00:14:38.711 ************************************ 00:14:38.711 10:25:52 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:38.711 10:25:52 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:38.711 10:25:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.711 10:25:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:38.711 ************************************ 00:14:38.711 START TEST raid5f_superblock_test 00:14:38.711 ************************************ 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80829 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80829 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80829 ']' 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.711 10:25:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.970 [2024-11-19 10:25:52.530153] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:14:38.970 [2024-11-19 10:25:52.530369] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80829 ] 00:14:38.970 [2024-11-19 10:25:52.706682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.230 [2024-11-19 10:25:52.811456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.230 [2024-11-19 10:25:52.994954] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.230 [2024-11-19 10:25:52.995014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:39.798 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 malloc1 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 [2024-11-19 10:25:53.388628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:39.799 [2024-11-19 10:25:53.388749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.799 [2024-11-19 10:25:53.388792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:39.799 [2024-11-19 10:25:53.388821] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.799 [2024-11-19 10:25:53.390805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.799 [2024-11-19 10:25:53.390888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:39.799 pt1 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 malloc2 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 [2024-11-19 10:25:53.444971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:39.799 [2024-11-19 10:25:53.445038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.799 [2024-11-19 10:25:53.445077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:39.799 [2024-11-19 10:25:53.445085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.799 [2024-11-19 10:25:53.447052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.799 [2024-11-19 10:25:53.447131] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:39.799 pt2 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 malloc3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 [2024-11-19 10:25:53.523802] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:39.799 [2024-11-19 10:25:53.523907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:39.799 [2024-11-19 10:25:53.523950] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:39.799 [2024-11-19 10:25:53.523979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:39.799 [2024-11-19 10:25:53.525954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:39.799 [2024-11-19 10:25:53.526031] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:39.799 pt3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 [2024-11-19 10:25:53.535835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:39.799 [2024-11-19 10:25:53.537566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:39.799 [2024-11-19 10:25:53.537680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:39.799 [2024-11-19 10:25:53.537851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:39.799 [2024-11-19 10:25:53.537904] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:39.799 [2024-11-19 10:25:53.538158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:39.799 [2024-11-19 10:25:53.543887] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:39.799 [2024-11-19 10:25:53.543939] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:39.799 [2024-11-19 10:25:53.544172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.059 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.059 "name": "raid_bdev1", 00:14:40.059 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:40.059 "strip_size_kb": 64, 00:14:40.059 "state": "online", 00:14:40.059 "raid_level": "raid5f", 00:14:40.059 "superblock": true, 00:14:40.059 "num_base_bdevs": 3, 00:14:40.059 "num_base_bdevs_discovered": 3, 00:14:40.059 "num_base_bdevs_operational": 3, 00:14:40.059 "base_bdevs_list": [ 00:14:40.059 { 00:14:40.059 "name": "pt1", 00:14:40.059 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.059 "is_configured": true, 00:14:40.059 "data_offset": 2048, 00:14:40.059 "data_size": 63488 00:14:40.059 }, 00:14:40.059 { 00:14:40.059 "name": "pt2", 00:14:40.059 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.059 "is_configured": true, 00:14:40.059 "data_offset": 2048, 00:14:40.059 "data_size": 63488 00:14:40.059 }, 00:14:40.059 { 00:14:40.059 "name": "pt3", 00:14:40.059 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.059 "is_configured": true, 00:14:40.059 "data_offset": 2048, 00:14:40.059 "data_size": 63488 00:14:40.059 } 00:14:40.059 ] 00:14:40.059 }' 00:14:40.059 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.059 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:40.319 10:25:53 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.319 [2024-11-19 10:25:53.970075] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:40.319 "name": "raid_bdev1", 00:14:40.319 "aliases": [ 00:14:40.319 "947df1df-e1ff-4018-b154-529cd944c21f" 00:14:40.319 ], 00:14:40.319 "product_name": "Raid Volume", 00:14:40.319 "block_size": 512, 00:14:40.319 "num_blocks": 126976, 00:14:40.319 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:40.319 "assigned_rate_limits": { 00:14:40.319 "rw_ios_per_sec": 0, 00:14:40.319 "rw_mbytes_per_sec": 0, 00:14:40.319 "r_mbytes_per_sec": 0, 00:14:40.319 "w_mbytes_per_sec": 0 00:14:40.319 }, 00:14:40.319 "claimed": false, 00:14:40.319 "zoned": false, 00:14:40.319 "supported_io_types": { 00:14:40.319 "read": true, 00:14:40.319 "write": true, 00:14:40.319 "unmap": false, 00:14:40.319 "flush": false, 00:14:40.319 "reset": true, 00:14:40.319 "nvme_admin": false, 00:14:40.319 "nvme_io": false, 00:14:40.319 "nvme_io_md": false, 
00:14:40.319 "write_zeroes": true, 00:14:40.319 "zcopy": false, 00:14:40.319 "get_zone_info": false, 00:14:40.319 "zone_management": false, 00:14:40.319 "zone_append": false, 00:14:40.319 "compare": false, 00:14:40.319 "compare_and_write": false, 00:14:40.319 "abort": false, 00:14:40.319 "seek_hole": false, 00:14:40.319 "seek_data": false, 00:14:40.319 "copy": false, 00:14:40.319 "nvme_iov_md": false 00:14:40.319 }, 00:14:40.319 "driver_specific": { 00:14:40.319 "raid": { 00:14:40.319 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:40.319 "strip_size_kb": 64, 00:14:40.319 "state": "online", 00:14:40.319 "raid_level": "raid5f", 00:14:40.319 "superblock": true, 00:14:40.319 "num_base_bdevs": 3, 00:14:40.319 "num_base_bdevs_discovered": 3, 00:14:40.319 "num_base_bdevs_operational": 3, 00:14:40.319 "base_bdevs_list": [ 00:14:40.319 { 00:14:40.319 "name": "pt1", 00:14:40.319 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.319 "is_configured": true, 00:14:40.319 "data_offset": 2048, 00:14:40.319 "data_size": 63488 00:14:40.319 }, 00:14:40.319 { 00:14:40.319 "name": "pt2", 00:14:40.319 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.319 "is_configured": true, 00:14:40.319 "data_offset": 2048, 00:14:40.319 "data_size": 63488 00:14:40.319 }, 00:14:40.319 { 00:14:40.319 "name": "pt3", 00:14:40.319 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.319 "is_configured": true, 00:14:40.319 "data_offset": 2048, 00:14:40.319 "data_size": 63488 00:14:40.319 } 00:14:40.319 ] 00:14:40.319 } 00:14:40.319 } 00:14:40.319 }' 00:14:40.319 10:25:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:40.319 pt2 00:14:40.319 pt3' 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.319 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.615 
10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.615 [2024-11-19 10:25:54.237558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=947df1df-e1ff-4018-b154-529cd944c21f 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 947df1df-e1ff-4018-b154-529cd944c21f ']' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:40.615 10:25:54 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.615 [2024-11-19 10:25:54.265349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.615 [2024-11-19 10:25:54.265373] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:40.615 [2024-11-19 10:25:54.265434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:40.615 [2024-11-19 10:25:54.265501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:40.615 [2024-11-19 10:25:54.265510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.615 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.616 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.899 [2024-11-19 10:25:54.405156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:40.899 [2024-11-19 10:25:54.406884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:40.899 [2024-11-19 10:25:54.406948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:40.899 [2024-11-19 10:25:54.407004] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:40.899 [2024-11-19 10:25:54.407042] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:40.899 [2024-11-19 10:25:54.407059] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:40.899 [2024-11-19 10:25:54.407074] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:40.899 [2024-11-19 10:25:54.407098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:40.899 request: 00:14:40.899 { 00:14:40.899 "name": "raid_bdev1", 00:14:40.899 "raid_level": "raid5f", 00:14:40.899 "base_bdevs": [ 00:14:40.899 "malloc1", 00:14:40.899 "malloc2", 00:14:40.899 "malloc3" 00:14:40.899 ], 00:14:40.899 "strip_size_kb": 64, 00:14:40.899 "superblock": false, 00:14:40.899 "method": "bdev_raid_create", 00:14:40.899 "req_id": 1 00:14:40.899 } 00:14:40.899 Got JSON-RPC error response 00:14:40.899 response: 00:14:40.899 { 00:14:40.899 "code": -17, 00:14:40.899 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:40.899 } 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.899 [2024-11-19 10:25:54.469023] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:40.899 [2024-11-19 10:25:54.469061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.899 [2024-11-19 10:25:54.469077] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:40.899 [2024-11-19 10:25:54.469085] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.899 [2024-11-19 10:25:54.471056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.899 [2024-11-19 10:25:54.471086] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:40.899 [2024-11-19 10:25:54.471150] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:40.899 [2024-11-19 10:25:54.471191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:40.899 pt1 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.899 "name": "raid_bdev1", 00:14:40.899 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:40.899 "strip_size_kb": 64, 00:14:40.899 "state": "configuring", 00:14:40.899 "raid_level": "raid5f", 00:14:40.899 "superblock": true, 00:14:40.899 "num_base_bdevs": 3, 00:14:40.899 "num_base_bdevs_discovered": 1, 00:14:40.899 
"num_base_bdevs_operational": 3, 00:14:40.899 "base_bdevs_list": [ 00:14:40.899 { 00:14:40.899 "name": "pt1", 00:14:40.899 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:40.899 "is_configured": true, 00:14:40.899 "data_offset": 2048, 00:14:40.899 "data_size": 63488 00:14:40.899 }, 00:14:40.899 { 00:14:40.899 "name": null, 00:14:40.899 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:40.899 "is_configured": false, 00:14:40.899 "data_offset": 2048, 00:14:40.899 "data_size": 63488 00:14:40.899 }, 00:14:40.899 { 00:14:40.899 "name": null, 00:14:40.899 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:40.899 "is_configured": false, 00:14:40.899 "data_offset": 2048, 00:14:40.899 "data_size": 63488 00:14:40.899 } 00:14:40.899 ] 00:14:40.899 }' 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.899 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:41.158 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.158 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.158 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.158 [2024-11-19 10:25:54.908275] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.158 [2024-11-19 10:25:54.908327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.159 [2024-11-19 10:25:54.908348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:41.159 [2024-11-19 10:25:54.908356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.159 [2024-11-19 10:25:54.908755] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.159 [2024-11-19 10:25:54.908777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.159 [2024-11-19 10:25:54.908853] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:41.159 [2024-11-19 10:25:54.908872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.159 pt2 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.159 [2024-11-19 10:25:54.920257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.159 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.418 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.418 10:25:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.418 "name": "raid_bdev1", 00:14:41.418 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:41.418 "strip_size_kb": 64, 00:14:41.418 "state": "configuring", 00:14:41.418 "raid_level": "raid5f", 00:14:41.418 "superblock": true, 00:14:41.418 "num_base_bdevs": 3, 00:14:41.418 "num_base_bdevs_discovered": 1, 00:14:41.418 "num_base_bdevs_operational": 3, 00:14:41.418 "base_bdevs_list": [ 00:14:41.418 { 00:14:41.418 "name": "pt1", 00:14:41.418 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.418 "is_configured": true, 00:14:41.418 "data_offset": 2048, 00:14:41.418 "data_size": 63488 00:14:41.418 }, 00:14:41.418 { 00:14:41.418 "name": null, 00:14:41.418 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.418 "is_configured": false, 00:14:41.418 "data_offset": 0, 00:14:41.418 "data_size": 63488 00:14:41.418 }, 00:14:41.418 { 00:14:41.418 "name": null, 00:14:41.418 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.418 "is_configured": false, 00:14:41.418 "data_offset": 2048, 00:14:41.418 "data_size": 63488 00:14:41.418 } 00:14:41.418 ] 00:14:41.418 }' 00:14:41.418 10:25:54 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.418 10:25:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.677 [2024-11-19 10:25:55.299573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:41.677 [2024-11-19 10:25:55.299634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.677 [2024-11-19 10:25:55.299649] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:41.677 [2024-11-19 10:25:55.299659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.677 [2024-11-19 10:25:55.300057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.677 [2024-11-19 10:25:55.300082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:41.677 [2024-11-19 10:25:55.300151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:41.677 [2024-11-19 10:25:55.300172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:41.677 pt2 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:41.677 10:25:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.677 [2024-11-19 10:25:55.311552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:41.677 [2024-11-19 10:25:55.311592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:41.677 [2024-11-19 10:25:55.311603] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:41.677 [2024-11-19 10:25:55.311612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:41.677 [2024-11-19 10:25:55.311940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:41.677 [2024-11-19 10:25:55.311959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:41.677 [2024-11-19 10:25:55.312026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:41.677 [2024-11-19 10:25:55.312046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:41.677 [2024-11-19 10:25:55.312188] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:41.677 [2024-11-19 10:25:55.312198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:41.677 [2024-11-19 10:25:55.312418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:41.677 [2024-11-19 10:25:55.317491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:41.677 [2024-11-19 10:25:55.317515] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:41.677 [2024-11-19 10:25:55.317690] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.677 pt3 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.677 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.678 "name": "raid_bdev1", 00:14:41.678 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:41.678 "strip_size_kb": 64, 00:14:41.678 "state": "online", 00:14:41.678 "raid_level": "raid5f", 00:14:41.678 "superblock": true, 00:14:41.678 "num_base_bdevs": 3, 00:14:41.678 "num_base_bdevs_discovered": 3, 00:14:41.678 "num_base_bdevs_operational": 3, 00:14:41.678 "base_bdevs_list": [ 00:14:41.678 { 00:14:41.678 "name": "pt1", 00:14:41.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:41.678 "is_configured": true, 00:14:41.678 "data_offset": 2048, 00:14:41.678 "data_size": 63488 00:14:41.678 }, 00:14:41.678 { 00:14:41.678 "name": "pt2", 00:14:41.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:41.678 "is_configured": true, 00:14:41.678 "data_offset": 2048, 00:14:41.678 "data_size": 63488 00:14:41.678 }, 00:14:41.678 { 00:14:41.678 "name": "pt3", 00:14:41.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:41.678 "is_configured": true, 00:14:41.678 "data_offset": 2048, 00:14:41.678 "data_size": 63488 00:14:41.678 } 00:14:41.678 ] 00:14:41.678 }' 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.678 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.246 [2024-11-19 10:25:55.735282] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:42.246 "name": "raid_bdev1", 00:14:42.246 "aliases": [ 00:14:42.246 "947df1df-e1ff-4018-b154-529cd944c21f" 00:14:42.246 ], 00:14:42.246 "product_name": "Raid Volume", 00:14:42.246 "block_size": 512, 00:14:42.246 "num_blocks": 126976, 00:14:42.246 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:42.246 "assigned_rate_limits": { 00:14:42.246 "rw_ios_per_sec": 0, 00:14:42.246 "rw_mbytes_per_sec": 0, 00:14:42.246 "r_mbytes_per_sec": 0, 00:14:42.246 "w_mbytes_per_sec": 0 00:14:42.246 }, 00:14:42.246 "claimed": false, 00:14:42.246 "zoned": false, 00:14:42.246 "supported_io_types": { 00:14:42.246 "read": true, 00:14:42.246 "write": true, 00:14:42.246 "unmap": false, 00:14:42.246 "flush": false, 00:14:42.246 "reset": true, 00:14:42.246 "nvme_admin": false, 00:14:42.246 "nvme_io": false, 00:14:42.246 "nvme_io_md": false, 00:14:42.246 "write_zeroes": true, 00:14:42.246 "zcopy": false, 00:14:42.246 
"get_zone_info": false, 00:14:42.246 "zone_management": false, 00:14:42.246 "zone_append": false, 00:14:42.246 "compare": false, 00:14:42.246 "compare_and_write": false, 00:14:42.246 "abort": false, 00:14:42.246 "seek_hole": false, 00:14:42.246 "seek_data": false, 00:14:42.246 "copy": false, 00:14:42.246 "nvme_iov_md": false 00:14:42.246 }, 00:14:42.246 "driver_specific": { 00:14:42.246 "raid": { 00:14:42.246 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:42.246 "strip_size_kb": 64, 00:14:42.246 "state": "online", 00:14:42.246 "raid_level": "raid5f", 00:14:42.246 "superblock": true, 00:14:42.246 "num_base_bdevs": 3, 00:14:42.246 "num_base_bdevs_discovered": 3, 00:14:42.246 "num_base_bdevs_operational": 3, 00:14:42.246 "base_bdevs_list": [ 00:14:42.246 { 00:14:42.246 "name": "pt1", 00:14:42.246 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:42.246 "is_configured": true, 00:14:42.246 "data_offset": 2048, 00:14:42.246 "data_size": 63488 00:14:42.246 }, 00:14:42.246 { 00:14:42.246 "name": "pt2", 00:14:42.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.246 "is_configured": true, 00:14:42.246 "data_offset": 2048, 00:14:42.246 "data_size": 63488 00:14:42.246 }, 00:14:42.246 { 00:14:42.246 "name": "pt3", 00:14:42.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.246 "is_configured": true, 00:14:42.246 "data_offset": 2048, 00:14:42.246 "data_size": 63488 00:14:42.246 } 00:14:42.246 ] 00:14:42.246 } 00:14:42.246 } 00:14:42.246 }' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:42.246 pt2 00:14:42.246 pt3' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.246 10:25:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.246 10:25:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:42.246 [2024-11-19 10:25:56.002747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:42.247 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 947df1df-e1ff-4018-b154-529cd944c21f '!=' 947df1df-e1ff-4018-b154-529cd944c21f ']' 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.506 [2024-11-19 10:25:56.046558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.506 
10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.506 "name": "raid_bdev1", 00:14:42.506 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:42.506 "strip_size_kb": 64, 00:14:42.506 "state": "online", 00:14:42.506 "raid_level": "raid5f", 00:14:42.506 "superblock": true, 00:14:42.506 "num_base_bdevs": 3, 00:14:42.506 "num_base_bdevs_discovered": 2, 00:14:42.506 "num_base_bdevs_operational": 2, 00:14:42.506 "base_bdevs_list": [ 00:14:42.506 { 00:14:42.506 "name": null, 00:14:42.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.506 "is_configured": false, 00:14:42.506 "data_offset": 0, 00:14:42.506 "data_size": 63488 00:14:42.506 }, 00:14:42.506 { 00:14:42.506 "name": "pt2", 00:14:42.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:42.506 "is_configured": true, 00:14:42.506 "data_offset": 2048, 00:14:42.506 "data_size": 63488 00:14:42.506 }, 00:14:42.506 { 00:14:42.506 "name": "pt3", 00:14:42.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:42.506 "is_configured": true, 00:14:42.506 "data_offset": 2048, 00:14:42.506 "data_size": 63488 00:14:42.506 } 00:14:42.506 ] 00:14:42.506 }' 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.506 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.765 [2024-11-19 10:25:56.481817] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:42.765 [2024-11-19 10:25:56.481844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:42.765 [2024-11-19 10:25:56.481905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:42.765 [2024-11-19 10:25:56.481954] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:42.765 [2024-11-19 10:25:56.481967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.765 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.024 [2024-11-19 10:25:56.549690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:43.024 [2024-11-19 10:25:56.549753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.024 [2024-11-19 10:25:56.549768] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:43.024 [2024-11-19 10:25:56.549777] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:14:43.024 [2024-11-19 10:25:56.551762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.024 [2024-11-19 10:25:56.551797] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:43.024 [2024-11-19 10:25:56.551861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:43.024 [2024-11-19 10:25:56.551902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.024 pt2 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.024 "name": "raid_bdev1", 00:14:43.024 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:43.024 "strip_size_kb": 64, 00:14:43.024 "state": "configuring", 00:14:43.024 "raid_level": "raid5f", 00:14:43.024 "superblock": true, 00:14:43.024 "num_base_bdevs": 3, 00:14:43.024 "num_base_bdevs_discovered": 1, 00:14:43.024 "num_base_bdevs_operational": 2, 00:14:43.024 "base_bdevs_list": [ 00:14:43.024 { 00:14:43.024 "name": null, 00:14:43.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.024 "is_configured": false, 00:14:43.024 "data_offset": 2048, 00:14:43.024 "data_size": 63488 00:14:43.024 }, 00:14:43.024 { 00:14:43.024 "name": "pt2", 00:14:43.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.024 "is_configured": true, 00:14:43.024 "data_offset": 2048, 00:14:43.024 "data_size": 63488 00:14:43.024 }, 00:14:43.024 { 00:14:43.024 "name": null, 00:14:43.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.024 "is_configured": false, 00:14:43.024 "data_offset": 2048, 00:14:43.024 "data_size": 63488 00:14:43.024 } 00:14:43.024 ] 00:14:43.024 }' 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.024 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:43.284 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:43.284 10:25:56 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:14:43.284 10:25:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:43.284 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 10:25:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 [2024-11-19 10:25:57.000939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:43.284 [2024-11-19 10:25:57.001007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.284 [2024-11-19 10:25:57.001029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:43.284 [2024-11-19 10:25:57.001040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.284 [2024-11-19 10:25:57.001463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.284 [2024-11-19 10:25:57.001484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:43.284 [2024-11-19 10:25:57.001557] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:43.284 [2024-11-19 10:25:57.001586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:43.284 [2024-11-19 10:25:57.001701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:43.284 [2024-11-19 10:25:57.001712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:43.284 [2024-11-19 10:25:57.001944] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:43.284 [2024-11-19 10:25:57.007214] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:43.284 [2024-11-19 10:25:57.007239] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:14:43.284 [2024-11-19 10:25:57.007545] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.284 pt3 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.284 10:25:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.284 "name": "raid_bdev1", 00:14:43.284 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:43.284 "strip_size_kb": 64, 00:14:43.284 "state": "online", 00:14:43.284 "raid_level": "raid5f", 00:14:43.284 "superblock": true, 00:14:43.284 "num_base_bdevs": 3, 00:14:43.284 "num_base_bdevs_discovered": 2, 00:14:43.284 "num_base_bdevs_operational": 2, 00:14:43.284 "base_bdevs_list": [ 00:14:43.284 { 00:14:43.284 "name": null, 00:14:43.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.284 "is_configured": false, 00:14:43.284 "data_offset": 2048, 00:14:43.284 "data_size": 63488 00:14:43.284 }, 00:14:43.284 { 00:14:43.284 "name": "pt2", 00:14:43.284 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.284 "is_configured": true, 00:14:43.284 "data_offset": 2048, 00:14:43.284 "data_size": 63488 00:14:43.284 }, 00:14:43.284 { 00:14:43.284 "name": "pt3", 00:14:43.284 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.284 "is_configured": true, 00:14:43.284 "data_offset": 2048, 00:14:43.284 "data_size": 63488 00:14:43.284 } 00:14:43.284 ] 00:14:43.284 }' 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.284 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 [2024-11-19 10:25:57.413114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.853 [2024-11-19 10:25:57.413144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.853 [2024-11-19 10:25:57.413205] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.853 [2024-11-19 10:25:57.413258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.853 [2024-11-19 10:25:57.413267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 [2024-11-19 10:25:57.485019] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:43.853 [2024-11-19 10:25:57.485065] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.853 [2024-11-19 10:25:57.485081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:43.853 [2024-11-19 10:25:57.485091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.853 [2024-11-19 10:25:57.487167] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.853 [2024-11-19 10:25:57.487197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:43.853 [2024-11-19 10:25:57.487265] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:43.853 [2024-11-19 10:25:57.487317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:43.853 [2024-11-19 10:25:57.487483] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:43.853 [2024-11-19 10:25:57.487500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.853 [2024-11-19 10:25:57.487515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:43.853 [2024-11-19 10:25:57.487580] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:43.853 pt1 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:43.853 10:25:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.853 "name": "raid_bdev1", 00:14:43.853 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:43.853 "strip_size_kb": 64, 00:14:43.853 "state": "configuring", 00:14:43.853 "raid_level": "raid5f", 00:14:43.853 
"superblock": true, 00:14:43.853 "num_base_bdevs": 3, 00:14:43.853 "num_base_bdevs_discovered": 1, 00:14:43.853 "num_base_bdevs_operational": 2, 00:14:43.853 "base_bdevs_list": [ 00:14:43.853 { 00:14:43.853 "name": null, 00:14:43.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.853 "is_configured": false, 00:14:43.853 "data_offset": 2048, 00:14:43.853 "data_size": 63488 00:14:43.853 }, 00:14:43.853 { 00:14:43.853 "name": "pt2", 00:14:43.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:43.853 "is_configured": true, 00:14:43.853 "data_offset": 2048, 00:14:43.853 "data_size": 63488 00:14:43.853 }, 00:14:43.853 { 00:14:43.853 "name": null, 00:14:43.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:43.853 "is_configured": false, 00:14:43.853 "data_offset": 2048, 00:14:43.853 "data_size": 63488 00:14:43.853 } 00:14:43.853 ] 00:14:43.853 }' 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.853 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.419 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:44.419 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:44.419 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.419 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.419 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.420 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:44.420 10:25:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:44.420 10:25:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.420 10:25:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.420 [2024-11-19 10:25:58.000139] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:44.420 [2024-11-19 10:25:58.000195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:44.420 [2024-11-19 10:25:58.000215] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:44.420 [2024-11-19 10:25:58.000224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:44.420 [2024-11-19 10:25:58.000651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:44.420 [2024-11-19 10:25:58.000668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:44.420 [2024-11-19 10:25:58.000742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:44.420 [2024-11-19 10:25:58.000764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:44.420 [2024-11-19 10:25:58.000884] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:44.420 [2024-11-19 10:25:58.000892] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:44.420 [2024-11-19 10:25:58.001154] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:44.420 [2024-11-19 10:25:58.007171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:44.420 [2024-11-19 10:25:58.007199] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:44.420 [2024-11-19 10:25:58.007438] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.420 pt3 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.420 "name": "raid_bdev1", 00:14:44.420 "uuid": "947df1df-e1ff-4018-b154-529cd944c21f", 00:14:44.420 "strip_size_kb": 64, 00:14:44.420 "state": "online", 00:14:44.420 "raid_level": 
"raid5f", 00:14:44.420 "superblock": true, 00:14:44.420 "num_base_bdevs": 3, 00:14:44.420 "num_base_bdevs_discovered": 2, 00:14:44.420 "num_base_bdevs_operational": 2, 00:14:44.420 "base_bdevs_list": [ 00:14:44.420 { 00:14:44.420 "name": null, 00:14:44.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.420 "is_configured": false, 00:14:44.420 "data_offset": 2048, 00:14:44.420 "data_size": 63488 00:14:44.420 }, 00:14:44.420 { 00:14:44.420 "name": "pt2", 00:14:44.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:44.420 "is_configured": true, 00:14:44.420 "data_offset": 2048, 00:14:44.420 "data_size": 63488 00:14:44.420 }, 00:14:44.420 { 00:14:44.420 "name": "pt3", 00:14:44.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:44.420 "is_configured": true, 00:14:44.420 "data_offset": 2048, 00:14:44.420 "data_size": 63488 00:14:44.420 } 00:14:44.420 ] 00:14:44.420 }' 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.420 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.678 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.678 [2024-11-19 10:25:58.457714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 947df1df-e1ff-4018-b154-529cd944c21f '!=' 947df1df-e1ff-4018-b154-529cd944c21f ']' 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80829 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80829 ']' 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80829 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80829 00:14:44.936 killing process with pid 80829 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80829' 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80829 00:14:44.936 [2024-11-19 10:25:58.522377] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:44.936 [2024-11-19 10:25:58.522455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:44.936 [2024-11-19 10:25:58.522507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:44.936 [2024-11-19 10:25:58.522517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:44.936 10:25:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80829 00:14:45.194 [2024-11-19 10:25:58.799365] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:46.131 10:25:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:46.131 00:14:46.131 real 0m7.384s 00:14:46.131 user 0m11.522s 00:14:46.131 sys 0m1.352s 00:14:46.131 10:25:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:46.131 10:25:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 ************************************ 00:14:46.131 END TEST raid5f_superblock_test 00:14:46.131 ************************************ 00:14:46.131 10:25:59 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:46.131 10:25:59 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:46.131 10:25:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:46.131 10:25:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:46.131 10:25:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:46.131 ************************************ 00:14:46.131 START TEST raid5f_rebuild_test 00:14:46.131 ************************************ 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:46.131 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:46.390 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:46.390 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:46.390 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:46.390 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:46.390 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:46.391 10:25:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81269 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81269 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81269 ']' 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:46.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:46.391 10:25:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.391 I/O size of 3145728 is greater than zero copy threshold (65536). 
00:14:46.391 Zero copy mechanism will not be used. 00:14:46.391 [2024-11-19 10:25:59.999613] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:14:46.391 [2024-11-19 10:25:59.999727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81269 ] 00:14:46.650 [2024-11-19 10:26:00.171017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.650 [2024-11-19 10:26:00.281560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.910 [2024-11-19 10:26:00.471691] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:46.910 [2024-11-19 10:26:00.471744] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.169 BaseBdev1_malloc 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.169 10:26:00 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.169 [2024-11-19 10:26:00.845689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:47.169 [2024-11-19 10:26:00.845776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.169 [2024-11-19 10:26:00.845800] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:47.169 [2024-11-19 10:26:00.845810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.169 [2024-11-19 10:26:00.847834] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.169 [2024-11-19 10:26:00.847877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.169 BaseBdev1 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.169 BaseBdev2_malloc 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.169 [2024-11-19 10:26:00.898113] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:14:47.169 [2024-11-19 10:26:00.898170] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.169 [2024-11-19 10:26:00.898190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:47.169 [2024-11-19 10:26:00.898202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.169 [2024-11-19 10:26:00.900174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.169 [2024-11-19 10:26:00.900214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:47.169 BaseBdev2 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.169 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 BaseBdev3_malloc 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 [2024-11-19 10:26:00.988796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:47.429 [2024-11-19 10:26:00.988845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.429 [2024-11-19 10:26:00.988866] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000008a80 00:14:47.429 [2024-11-19 10:26:00.988876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.429 [2024-11-19 10:26:00.990864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.429 [2024-11-19 10:26:00.990901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:47.429 BaseBdev3 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.429 10:26:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 spare_malloc 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 spare_delay 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 [2024-11-19 10:26:01.053916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:47.429 [2024-11-19 10:26:01.053964] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.429 [2024-11-19 10:26:01.053996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:47.429 [2024-11-19 10:26:01.054006] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.429 [2024-11-19 10:26:01.056019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.429 [2024-11-19 10:26:01.056053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:47.429 spare 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 [2024-11-19 10:26:01.065956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.429 [2024-11-19 10:26:01.067662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:47.429 [2024-11-19 10:26:01.067723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:47.429 [2024-11-19 10:26:01.067811] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:47.429 [2024-11-19 10:26:01.067821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:47.429 [2024-11-19 10:26:01.068067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:47.429 [2024-11-19 10:26:01.073374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:47.429 [2024-11-19 10:26:01.073396] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:47.429 [2024-11-19 10:26:01.073574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.429 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.429 10:26:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.429 "name": "raid_bdev1", 00:14:47.429 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:47.429 "strip_size_kb": 64, 00:14:47.429 "state": "online", 00:14:47.429 "raid_level": "raid5f", 00:14:47.429 "superblock": false, 00:14:47.429 "num_base_bdevs": 3, 00:14:47.429 "num_base_bdevs_discovered": 3, 00:14:47.429 "num_base_bdevs_operational": 3, 00:14:47.429 "base_bdevs_list": [ 00:14:47.429 { 00:14:47.429 "name": "BaseBdev1", 00:14:47.430 "uuid": "cf4d6bb7-57d9-524b-916a-75dac0ae5aee", 00:14:47.430 "is_configured": true, 00:14:47.430 "data_offset": 0, 00:14:47.430 "data_size": 65536 00:14:47.430 }, 00:14:47.430 { 00:14:47.430 "name": "BaseBdev2", 00:14:47.430 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:47.430 "is_configured": true, 00:14:47.430 "data_offset": 0, 00:14:47.430 "data_size": 65536 00:14:47.430 }, 00:14:47.430 { 00:14:47.430 "name": "BaseBdev3", 00:14:47.430 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:47.430 "is_configured": true, 00:14:47.430 "data_offset": 0, 00:14:47.430 "data_size": 65536 00:14:47.430 } 00:14:47.430 ] 00:14:47.430 }' 00:14:47.430 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.430 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.998 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:47.998 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:47.998 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.998 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.998 [2024-11-19 10:26:01.547253] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:47.998 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:47.998 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:14:47.999 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:48.259 [2024-11-19 10:26:01.782694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:48.259 /dev/nbd0 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.259 1+0 records in 00:14:48.259 1+0 records out 00:14:48.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315216 s, 13.0 MB/s 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:48.259 10:26:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:48.518 512+0 records in 00:14:48.518 512+0 records out 00:14:48.518 67108864 bytes (67 MB, 64 MiB) copied, 0.354137 s, 189 MB/s 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.518 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:48.777 
[2024-11-19 10:26:02.437659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.777 [2024-11-19 10:26:02.453301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:48.777 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.778 "name": "raid_bdev1", 00:14:48.778 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:48.778 "strip_size_kb": 64, 00:14:48.778 "state": "online", 00:14:48.778 "raid_level": "raid5f", 00:14:48.778 "superblock": false, 00:14:48.778 "num_base_bdevs": 3, 00:14:48.778 "num_base_bdevs_discovered": 2, 00:14:48.778 "num_base_bdevs_operational": 2, 00:14:48.778 "base_bdevs_list": [ 00:14:48.778 { 00:14:48.778 "name": null, 00:14:48.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.778 "is_configured": false, 00:14:48.778 "data_offset": 0, 00:14:48.778 "data_size": 65536 00:14:48.778 }, 00:14:48.778 { 00:14:48.778 "name": "BaseBdev2", 00:14:48.778 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:48.778 "is_configured": true, 00:14:48.778 "data_offset": 0, 00:14:48.778 "data_size": 65536 00:14:48.778 }, 00:14:48.778 { 00:14:48.778 "name": "BaseBdev3", 00:14:48.778 "uuid": 
"d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:48.778 "is_configured": true, 00:14:48.778 "data_offset": 0, 00:14:48.778 "data_size": 65536 00:14:48.778 } 00:14:48.778 ] 00:14:48.778 }' 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.778 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.348 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.348 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.348 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:49.348 [2024-11-19 10:26:02.880598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.348 [2024-11-19 10:26:02.896057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:49.348 10:26:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.348 10:26:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:49.348 [2024-11-19 10:26:02.903213] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.288 10:26:03 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.288 "name": "raid_bdev1", 00:14:50.288 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:50.288 "strip_size_kb": 64, 00:14:50.288 "state": "online", 00:14:50.288 "raid_level": "raid5f", 00:14:50.288 "superblock": false, 00:14:50.288 "num_base_bdevs": 3, 00:14:50.288 "num_base_bdevs_discovered": 3, 00:14:50.288 "num_base_bdevs_operational": 3, 00:14:50.288 "process": { 00:14:50.288 "type": "rebuild", 00:14:50.288 "target": "spare", 00:14:50.288 "progress": { 00:14:50.288 "blocks": 20480, 00:14:50.288 "percent": 15 00:14:50.288 } 00:14:50.288 }, 00:14:50.288 "base_bdevs_list": [ 00:14:50.288 { 00:14:50.288 "name": "spare", 00:14:50.288 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:50.288 "is_configured": true, 00:14:50.288 "data_offset": 0, 00:14:50.288 "data_size": 65536 00:14:50.288 }, 00:14:50.288 { 00:14:50.288 "name": "BaseBdev2", 00:14:50.288 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:50.288 "is_configured": true, 00:14:50.288 "data_offset": 0, 00:14:50.288 "data_size": 65536 00:14:50.288 }, 00:14:50.288 { 00:14:50.288 "name": "BaseBdev3", 00:14:50.288 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:50.288 "is_configured": true, 00:14:50.288 "data_offset": 0, 00:14:50.288 "data_size": 65536 00:14:50.288 } 00:14:50.288 ] 00:14:50.288 }' 00:14:50.288 10:26:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.288 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:50.288 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.288 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:50.288 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:50.288 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.288 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.288 [2024-11-19 10:26:04.042347] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.548 [2024-11-19 10:26:04.110441] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:50.548 [2024-11-19 10:26:04.110539] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:50.548 [2024-11-19 10:26:04.110596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:50.548 [2024-11-19 10:26:04.110617] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.548 "name": "raid_bdev1", 00:14:50.548 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:50.548 "strip_size_kb": 64, 00:14:50.548 "state": "online", 00:14:50.548 "raid_level": "raid5f", 00:14:50.548 "superblock": false, 00:14:50.548 "num_base_bdevs": 3, 00:14:50.548 "num_base_bdevs_discovered": 2, 00:14:50.548 "num_base_bdevs_operational": 2, 00:14:50.548 "base_bdevs_list": [ 00:14:50.548 { 00:14:50.548 "name": null, 00:14:50.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.548 "is_configured": false, 00:14:50.548 "data_offset": 0, 00:14:50.548 "data_size": 65536 00:14:50.548 }, 00:14:50.548 { 00:14:50.548 "name": "BaseBdev2", 00:14:50.548 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:50.548 "is_configured": true, 00:14:50.548 "data_offset": 0, 00:14:50.548 "data_size": 65536 00:14:50.548 }, 00:14:50.548 { 00:14:50.548 "name": "BaseBdev3", 00:14:50.548 "uuid": 
"d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:50.548 "is_configured": true, 00:14:50.548 "data_offset": 0, 00:14:50.548 "data_size": 65536 00:14:50.548 } 00:14:50.548 ] 00:14:50.548 }' 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.548 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.808 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.808 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.808 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.808 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.808 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.068 "name": "raid_bdev1", 00:14:51.068 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:51.068 "strip_size_kb": 64, 00:14:51.068 "state": "online", 00:14:51.068 "raid_level": "raid5f", 00:14:51.068 "superblock": false, 00:14:51.068 "num_base_bdevs": 3, 00:14:51.068 "num_base_bdevs_discovered": 2, 00:14:51.068 "num_base_bdevs_operational": 2, 00:14:51.068 "base_bdevs_list": [ 00:14:51.068 { 00:14:51.068 
"name": null, 00:14:51.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.068 "is_configured": false, 00:14:51.068 "data_offset": 0, 00:14:51.068 "data_size": 65536 00:14:51.068 }, 00:14:51.068 { 00:14:51.068 "name": "BaseBdev2", 00:14:51.068 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:51.068 "is_configured": true, 00:14:51.068 "data_offset": 0, 00:14:51.068 "data_size": 65536 00:14:51.068 }, 00:14:51.068 { 00:14:51.068 "name": "BaseBdev3", 00:14:51.068 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:51.068 "is_configured": true, 00:14:51.068 "data_offset": 0, 00:14:51.068 "data_size": 65536 00:14:51.068 } 00:14:51.068 ] 00:14:51.068 }' 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:51.068 [2024-11-19 10:26:04.719075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:51.068 [2024-11-19 10:26:04.734644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.068 10:26:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:51.068 [2024-11-19 10:26:04.741547] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild 
on raid bdev raid_bdev1 00:14:52.018 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.018 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.018 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.018 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.018 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.019 "name": "raid_bdev1", 00:14:52.019 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:52.019 "strip_size_kb": 64, 00:14:52.019 "state": "online", 00:14:52.019 "raid_level": "raid5f", 00:14:52.019 "superblock": false, 00:14:52.019 "num_base_bdevs": 3, 00:14:52.019 "num_base_bdevs_discovered": 3, 00:14:52.019 "num_base_bdevs_operational": 3, 00:14:52.019 "process": { 00:14:52.019 "type": "rebuild", 00:14:52.019 "target": "spare", 00:14:52.019 "progress": { 00:14:52.019 "blocks": 20480, 00:14:52.019 "percent": 15 00:14:52.019 } 00:14:52.019 }, 00:14:52.019 "base_bdevs_list": [ 00:14:52.019 { 00:14:52.019 "name": "spare", 00:14:52.019 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:52.019 "is_configured": true, 00:14:52.019 "data_offset": 0, 
00:14:52.019 "data_size": 65536 00:14:52.019 }, 00:14:52.019 { 00:14:52.019 "name": "BaseBdev2", 00:14:52.019 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:52.019 "is_configured": true, 00:14:52.019 "data_offset": 0, 00:14:52.019 "data_size": 65536 00:14:52.019 }, 00:14:52.019 { 00:14:52.019 "name": "BaseBdev3", 00:14:52.019 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:52.019 "is_configured": true, 00:14:52.019 "data_offset": 0, 00:14:52.019 "data_size": 65536 00:14:52.019 } 00:14:52.019 ] 00:14:52.019 }' 00:14:52.019 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=530 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:52.294 10:26:05 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.294 "name": "raid_bdev1", 00:14:52.294 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:52.294 "strip_size_kb": 64, 00:14:52.294 "state": "online", 00:14:52.294 "raid_level": "raid5f", 00:14:52.294 "superblock": false, 00:14:52.294 "num_base_bdevs": 3, 00:14:52.294 "num_base_bdevs_discovered": 3, 00:14:52.294 "num_base_bdevs_operational": 3, 00:14:52.294 "process": { 00:14:52.294 "type": "rebuild", 00:14:52.294 "target": "spare", 00:14:52.294 "progress": { 00:14:52.294 "blocks": 22528, 00:14:52.294 "percent": 17 00:14:52.294 } 00:14:52.294 }, 00:14:52.294 "base_bdevs_list": [ 00:14:52.294 { 00:14:52.294 "name": "spare", 00:14:52.294 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:52.294 "is_configured": true, 00:14:52.294 "data_offset": 0, 00:14:52.294 "data_size": 65536 00:14:52.294 }, 00:14:52.294 { 00:14:52.294 "name": "BaseBdev2", 00:14:52.294 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:52.294 "is_configured": true, 00:14:52.294 "data_offset": 0, 00:14:52.294 "data_size": 65536 00:14:52.294 }, 00:14:52.294 { 00:14:52.294 "name": "BaseBdev3", 00:14:52.294 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:52.294 "is_configured": true, 00:14:52.294 "data_offset": 0, 00:14:52.294 "data_size": 65536 00:14:52.294 } 
00:14:52.294 ] 00:14:52.294 }' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.294 10:26:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.230 10:26:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.230 10:26:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.489 10:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.489 "name": "raid_bdev1", 00:14:53.489 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:53.489 
"strip_size_kb": 64, 00:14:53.489 "state": "online", 00:14:53.489 "raid_level": "raid5f", 00:14:53.489 "superblock": false, 00:14:53.489 "num_base_bdevs": 3, 00:14:53.489 "num_base_bdevs_discovered": 3, 00:14:53.489 "num_base_bdevs_operational": 3, 00:14:53.489 "process": { 00:14:53.489 "type": "rebuild", 00:14:53.489 "target": "spare", 00:14:53.489 "progress": { 00:14:53.489 "blocks": 45056, 00:14:53.489 "percent": 34 00:14:53.489 } 00:14:53.489 }, 00:14:53.489 "base_bdevs_list": [ 00:14:53.489 { 00:14:53.489 "name": "spare", 00:14:53.489 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:53.489 "is_configured": true, 00:14:53.489 "data_offset": 0, 00:14:53.489 "data_size": 65536 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "name": "BaseBdev2", 00:14:53.489 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:53.489 "is_configured": true, 00:14:53.489 "data_offset": 0, 00:14:53.489 "data_size": 65536 00:14:53.489 }, 00:14:53.489 { 00:14:53.489 "name": "BaseBdev3", 00:14:53.489 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:53.489 "is_configured": true, 00:14:53.489 "data_offset": 0, 00:14:53.489 "data_size": 65536 00:14:53.490 } 00:14:53.490 ] 00:14:53.490 }' 00:14:53.490 10:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.490 10:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.490 10:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.490 10:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.490 10:26:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.425 10:26:08 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.425 "name": "raid_bdev1", 00:14:54.425 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:54.425 "strip_size_kb": 64, 00:14:54.425 "state": "online", 00:14:54.425 "raid_level": "raid5f", 00:14:54.425 "superblock": false, 00:14:54.425 "num_base_bdevs": 3, 00:14:54.425 "num_base_bdevs_discovered": 3, 00:14:54.425 "num_base_bdevs_operational": 3, 00:14:54.425 "process": { 00:14:54.425 "type": "rebuild", 00:14:54.425 "target": "spare", 00:14:54.425 "progress": { 00:14:54.425 "blocks": 67584, 00:14:54.425 "percent": 51 00:14:54.425 } 00:14:54.425 }, 00:14:54.425 "base_bdevs_list": [ 00:14:54.425 { 00:14:54.425 "name": "spare", 00:14:54.425 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:54.425 "is_configured": true, 00:14:54.425 "data_offset": 0, 00:14:54.425 "data_size": 65536 00:14:54.425 }, 00:14:54.425 { 00:14:54.425 "name": "BaseBdev2", 00:14:54.425 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:54.425 
"is_configured": true, 00:14:54.425 "data_offset": 0, 00:14:54.425 "data_size": 65536 00:14:54.425 }, 00:14:54.425 { 00:14:54.425 "name": "BaseBdev3", 00:14:54.425 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:54.425 "is_configured": true, 00:14:54.425 "data_offset": 0, 00:14:54.425 "data_size": 65536 00:14:54.425 } 00:14:54.425 ] 00:14:54.425 }' 00:14:54.425 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.685 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.685 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.685 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.685 10:26:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.619 "name": "raid_bdev1", 00:14:55.619 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:55.619 "strip_size_kb": 64, 00:14:55.619 "state": "online", 00:14:55.619 "raid_level": "raid5f", 00:14:55.619 "superblock": false, 00:14:55.619 "num_base_bdevs": 3, 00:14:55.619 "num_base_bdevs_discovered": 3, 00:14:55.619 "num_base_bdevs_operational": 3, 00:14:55.619 "process": { 00:14:55.619 "type": "rebuild", 00:14:55.619 "target": "spare", 00:14:55.619 "progress": { 00:14:55.619 "blocks": 92160, 00:14:55.619 "percent": 70 00:14:55.619 } 00:14:55.619 }, 00:14:55.619 "base_bdevs_list": [ 00:14:55.619 { 00:14:55.619 "name": "spare", 00:14:55.619 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:55.619 "is_configured": true, 00:14:55.619 "data_offset": 0, 00:14:55.619 "data_size": 65536 00:14:55.619 }, 00:14:55.619 { 00:14:55.619 "name": "BaseBdev2", 00:14:55.619 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:55.619 "is_configured": true, 00:14:55.619 "data_offset": 0, 00:14:55.619 "data_size": 65536 00:14:55.619 }, 00:14:55.619 { 00:14:55.619 "name": "BaseBdev3", 00:14:55.619 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:55.619 "is_configured": true, 00:14:55.619 "data_offset": 0, 00:14:55.619 "data_size": 65536 00:14:55.619 } 00:14:55.619 ] 00:14:55.619 }' 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:55.619 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.878 10:26:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:55.878 10:26:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.819 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.819 "name": "raid_bdev1", 00:14:56.819 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:56.819 "strip_size_kb": 64, 00:14:56.819 "state": "online", 00:14:56.819 "raid_level": "raid5f", 00:14:56.819 "superblock": false, 00:14:56.819 "num_base_bdevs": 3, 00:14:56.819 "num_base_bdevs_discovered": 3, 00:14:56.819 "num_base_bdevs_operational": 3, 00:14:56.819 "process": { 00:14:56.819 "type": "rebuild", 00:14:56.819 "target": "spare", 00:14:56.819 "progress": { 00:14:56.819 "blocks": 114688, 00:14:56.819 "percent": 87 00:14:56.819 } 00:14:56.819 }, 00:14:56.819 "base_bdevs_list": [ 00:14:56.819 { 
00:14:56.819 "name": "spare", 00:14:56.819 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:56.819 "is_configured": true, 00:14:56.819 "data_offset": 0, 00:14:56.819 "data_size": 65536 00:14:56.819 }, 00:14:56.819 { 00:14:56.819 "name": "BaseBdev2", 00:14:56.819 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:56.819 "is_configured": true, 00:14:56.819 "data_offset": 0, 00:14:56.819 "data_size": 65536 00:14:56.819 }, 00:14:56.819 { 00:14:56.820 "name": "BaseBdev3", 00:14:56.820 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:56.820 "is_configured": true, 00:14:56.820 "data_offset": 0, 00:14:56.820 "data_size": 65536 00:14:56.820 } 00:14:56.820 ] 00:14:56.820 }' 00:14:56.820 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.820 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.820 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.820 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.820 10:26:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.760 [2024-11-19 10:26:11.176809] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:57.760 [2024-11-19 10:26:11.176924] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:57.760 [2024-11-19 10:26:11.176985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.020 10:26:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.020 "name": "raid_bdev1", 00:14:58.020 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:58.020 "strip_size_kb": 64, 00:14:58.020 "state": "online", 00:14:58.020 "raid_level": "raid5f", 00:14:58.020 "superblock": false, 00:14:58.020 "num_base_bdevs": 3, 00:14:58.020 "num_base_bdevs_discovered": 3, 00:14:58.020 "num_base_bdevs_operational": 3, 00:14:58.020 "base_bdevs_list": [ 00:14:58.020 { 00:14:58.020 "name": "spare", 00:14:58.020 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:58.020 "is_configured": true, 00:14:58.020 "data_offset": 0, 00:14:58.020 "data_size": 65536 00:14:58.020 }, 00:14:58.020 { 00:14:58.020 "name": "BaseBdev2", 00:14:58.020 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:58.020 "is_configured": true, 00:14:58.020 "data_offset": 0, 00:14:58.020 "data_size": 65536 00:14:58.020 }, 00:14:58.020 { 00:14:58.020 "name": "BaseBdev3", 00:14:58.020 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:58.020 "is_configured": true, 00:14:58.020 "data_offset": 0, 00:14:58.020 "data_size": 65536 00:14:58.020 } 
00:14:58.020 ] 00:14:58.020 }' 00:14:58.020 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.021 "name": "raid_bdev1", 00:14:58.021 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:58.021 "strip_size_kb": 64, 00:14:58.021 "state": "online", 00:14:58.021 "raid_level": "raid5f", 00:14:58.021 "superblock": false, 
00:14:58.021 "num_base_bdevs": 3, 00:14:58.021 "num_base_bdevs_discovered": 3, 00:14:58.021 "num_base_bdevs_operational": 3, 00:14:58.021 "base_bdevs_list": [ 00:14:58.021 { 00:14:58.021 "name": "spare", 00:14:58.021 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:58.021 "is_configured": true, 00:14:58.021 "data_offset": 0, 00:14:58.021 "data_size": 65536 00:14:58.021 }, 00:14:58.021 { 00:14:58.021 "name": "BaseBdev2", 00:14:58.021 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:58.021 "is_configured": true, 00:14:58.021 "data_offset": 0, 00:14:58.021 "data_size": 65536 00:14:58.021 }, 00:14:58.021 { 00:14:58.021 "name": "BaseBdev3", 00:14:58.021 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 00:14:58.021 "is_configured": true, 00:14:58.021 "data_offset": 0, 00:14:58.021 "data_size": 65536 00:14:58.021 } 00:14:58.021 ] 00:14:58.021 }' 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.021 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.281 
10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.281 "name": "raid_bdev1", 00:14:58.281 "uuid": "e36d741a-d378-4793-8901-dc00c08791a5", 00:14:58.281 "strip_size_kb": 64, 00:14:58.281 "state": "online", 00:14:58.281 "raid_level": "raid5f", 00:14:58.281 "superblock": false, 00:14:58.281 "num_base_bdevs": 3, 00:14:58.281 "num_base_bdevs_discovered": 3, 00:14:58.281 "num_base_bdevs_operational": 3, 00:14:58.281 "base_bdevs_list": [ 00:14:58.281 { 00:14:58.281 "name": "spare", 00:14:58.281 "uuid": "f770a145-8d90-5267-9261-882b40f64392", 00:14:58.281 "is_configured": true, 00:14:58.281 "data_offset": 0, 00:14:58.281 "data_size": 65536 00:14:58.281 }, 00:14:58.281 { 00:14:58.281 "name": "BaseBdev2", 00:14:58.281 "uuid": "ebcea601-b233-5542-8748-81ab296d2268", 00:14:58.281 "is_configured": true, 00:14:58.281 "data_offset": 0, 00:14:58.281 "data_size": 65536 00:14:58.281 }, 00:14:58.281 { 00:14:58.281 "name": "BaseBdev3", 00:14:58.281 "uuid": "d2739bc3-79fc-555d-9961-d3c0c1e7c688", 
00:14:58.281 "is_configured": true, 00:14:58.281 "data_offset": 0, 00:14:58.281 "data_size": 65536 00:14:58.281 } 00:14:58.281 ] 00:14:58.281 }' 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.281 10:26:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.541 [2024-11-19 10:26:12.276605] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:58.541 [2024-11-19 10:26:12.276672] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:58.541 [2024-11-19 10:26:12.276787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:58.541 [2024-11-19 10:26:12.276883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:58.541 [2024-11-19 10:26:12.276899] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.541 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:58.801 /dev/nbd0 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:58.801 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.061 1+0 records in 00:14:59.061 1+0 records out 00:14:59.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353699 s, 11.6 MB/s 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:59.061 /dev/nbd1 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:59.061 10:26:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:59.061 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:59.061 1+0 records in 00:14:59.061 1+0 records out 00:14:59.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423488 s, 9.7 MB/s 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:59.321 10:26:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- 
# cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.321 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:59.579 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:59.579 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:59.580 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81269 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81269 ']' 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81269 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81269 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:59.840 killing process with pid 81269 00:14:59.840 Received shutdown signal, test time was about 60.000000 seconds 00:14:59.840 00:14:59.840 Latency(us) 00:14:59.840 [2024-11-19T10:26:13.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.840 [2024-11-19T10:26:13.621Z] 
=================================================================================================================== 00:14:59.840 [2024-11-19T10:26:13.621Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81269' 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81269 00:14:59.840 [2024-11-19 10:26:13.523335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.840 10:26:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81269 00:15:00.434 [2024-11-19 10:26:13.891028] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:01.369 ************************************ 00:15:01.369 END TEST raid5f_rebuild_test 00:15:01.369 ************************************ 00:15:01.369 00:15:01.369 real 0m14.994s 00:15:01.369 user 0m18.425s 00:15:01.369 sys 0m1.967s 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.369 10:26:14 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:01.369 10:26:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:01.369 10:26:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.369 10:26:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.369 ************************************ 00:15:01.369 START TEST raid5f_rebuild_test_sb 00:15:01.369 ************************************ 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:01.369 
10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81704 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81704 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81704 ']' 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:01.369 10:26:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.369 [2024-11-19 10:26:15.074176] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:15:01.369 [2024-11-19 10:26:15.074361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:01.369 Zero copy mechanism will not be used. 00:15:01.369 -allocations --file-prefix=spdk_pid81704 ] 00:15:01.627 [2024-11-19 10:26:15.246917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.627 [2024-11-19 10:26:15.350888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.886 [2024-11-19 10:26:15.546623] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:01.886 [2024-11-19 10:26:15.546674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.144 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:02.144 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:02.144 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.144 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:02.144 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.144 10:26:15 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 BaseBdev1_malloc 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 [2024-11-19 10:26:15.938337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:02.404 [2024-11-19 10:26:15.938450] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.404 [2024-11-19 10:26:15.938505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:02.404 [2024-11-19 10:26:15.938544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.404 [2024-11-19 10:26:15.940624] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.404 [2024-11-19 10:26:15.940699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:02.404 BaseBdev1 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 BaseBdev2_malloc 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 [2024-11-19 10:26:15.991567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:02.404 [2024-11-19 10:26:15.991671] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.404 [2024-11-19 10:26:15.991694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:02.404 [2024-11-19 10:26:15.991704] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.404 [2024-11-19 10:26:15.993671] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.404 [2024-11-19 10:26:15.993709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:02.404 BaseBdev2 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 BaseBdev3_malloc 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 [2024-11-19 10:26:16.077348] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:02.404 [2024-11-19 10:26:16.077466] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.404 [2024-11-19 10:26:16.077505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:02.404 [2024-11-19 10:26:16.077543] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.404 [2024-11-19 10:26:16.079553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.404 [2024-11-19 10:26:16.079593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:02.404 BaseBdev3 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 spare_malloc 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 spare_delay 00:15:02.404 
10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 [2024-11-19 10:26:16.142612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.404 [2024-11-19 10:26:16.142726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.404 [2024-11-19 10:26:16.142759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:02.404 [2024-11-19 10:26:16.142789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.404 [2024-11-19 10:26:16.144846] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.404 [2024-11-19 10:26:16.144924] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:02.404 spare 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 [2024-11-19 10:26:16.154653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:02.404 [2024-11-19 10:26:16.156379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.404 [2024-11-19 10:26:16.156441] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:02.404 [2024-11-19 10:26:16.156600] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:02.404 [2024-11-19 10:26:16.156637] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:02.404 [2024-11-19 10:26:16.156872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:02.404 [2024-11-19 10:26:16.162124] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:02.404 [2024-11-19 10:26:16.162149] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:02.404 [2024-11-19 10:26:16.162323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.404 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.663 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.663 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.663 "name": "raid_bdev1", 00:15:02.663 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:02.663 "strip_size_kb": 64, 00:15:02.663 "state": "online", 00:15:02.663 "raid_level": "raid5f", 00:15:02.663 "superblock": true, 00:15:02.663 "num_base_bdevs": 3, 00:15:02.663 "num_base_bdevs_discovered": 3, 00:15:02.663 "num_base_bdevs_operational": 3, 00:15:02.663 "base_bdevs_list": [ 00:15:02.663 { 00:15:02.663 "name": "BaseBdev1", 00:15:02.663 "uuid": "3872737f-f4a3-5c77-a5bf-0fd008552f52", 00:15:02.663 "is_configured": true, 00:15:02.663 "data_offset": 2048, 00:15:02.663 "data_size": 63488 00:15:02.663 }, 00:15:02.663 { 00:15:02.663 "name": "BaseBdev2", 00:15:02.663 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:02.663 "is_configured": true, 00:15:02.663 "data_offset": 2048, 00:15:02.663 "data_size": 63488 00:15:02.663 }, 00:15:02.663 { 00:15:02.663 "name": "BaseBdev3", 00:15:02.663 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:02.663 "is_configured": true, 00:15:02.663 "data_offset": 2048, 00:15:02.663 "data_size": 63488 00:15:02.663 } 00:15:02.663 ] 00:15:02.663 }' 00:15:02.663 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.663 10:26:16 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:02.924 [2024-11-19 10:26:16.619815] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:02.924 10:26:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:02.924 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:03.193 [2024-11-19 10:26:16.871468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:03.193 /dev/nbd0 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 
00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.193 1+0 records in 00:15:03.193 1+0 records out 00:15:03.193 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423895 s, 9.7 MB/s 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:03.193 10:26:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:03.776 496+0 records in 00:15:03.776 496+0 records out 00:15:03.776 65011712 bytes (65 MB, 62 MiB) copied, 0.355372 s, 183 MB/s 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.776 [2024-11-19 10:26:17.517063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.776 [2024-11-19 10:26:17.531430] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.776 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.036 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.036 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.036 "name": "raid_bdev1", 00:15:04.036 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:04.036 "strip_size_kb": 64, 00:15:04.036 "state": "online", 00:15:04.036 "raid_level": "raid5f", 00:15:04.036 "superblock": true, 00:15:04.036 "num_base_bdevs": 3, 00:15:04.036 "num_base_bdevs_discovered": 2, 00:15:04.036 "num_base_bdevs_operational": 2, 00:15:04.036 "base_bdevs_list": [ 00:15:04.036 { 00:15:04.036 "name": null, 00:15:04.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.036 "is_configured": false, 00:15:04.036 "data_offset": 0, 00:15:04.036 "data_size": 63488 00:15:04.036 }, 00:15:04.036 { 00:15:04.036 "name": "BaseBdev2", 00:15:04.036 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:04.036 "is_configured": true, 00:15:04.036 "data_offset": 2048, 00:15:04.036 "data_size": 63488 00:15:04.036 }, 00:15:04.036 { 00:15:04.036 "name": "BaseBdev3", 00:15:04.036 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:04.036 "is_configured": true, 00:15:04.036 "data_offset": 2048, 00:15:04.036 "data_size": 63488 00:15:04.036 } 00:15:04.036 ] 00:15:04.036 }' 00:15:04.036 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.036 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.296 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.296 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.296 [2024-11-19 10:26:17.950668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.296 [2024-11-19 10:26:17.966192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:04.296 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.296 10:26:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.296 [2024-11-19 10:26:17.972914] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.234 10:26:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.494 "name": "raid_bdev1", 00:15:05.494 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:05.494 "strip_size_kb": 64, 00:15:05.494 "state": "online", 00:15:05.494 "raid_level": "raid5f", 00:15:05.494 "superblock": true, 00:15:05.494 "num_base_bdevs": 3, 00:15:05.494 "num_base_bdevs_discovered": 3, 00:15:05.494 "num_base_bdevs_operational": 3, 00:15:05.494 "process": { 00:15:05.494 "type": "rebuild", 00:15:05.494 "target": "spare", 00:15:05.494 "progress": { 
00:15:05.494 "blocks": 20480, 00:15:05.494 "percent": 16 00:15:05.494 } 00:15:05.494 }, 00:15:05.494 "base_bdevs_list": [ 00:15:05.494 { 00:15:05.494 "name": "spare", 00:15:05.494 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:05.494 "is_configured": true, 00:15:05.494 "data_offset": 2048, 00:15:05.494 "data_size": 63488 00:15:05.494 }, 00:15:05.494 { 00:15:05.494 "name": "BaseBdev2", 00:15:05.494 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:05.494 "is_configured": true, 00:15:05.494 "data_offset": 2048, 00:15:05.494 "data_size": 63488 00:15:05.494 }, 00:15:05.494 { 00:15:05.494 "name": "BaseBdev3", 00:15:05.494 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:05.494 "is_configured": true, 00:15:05.494 "data_offset": 2048, 00:15:05.494 "data_size": 63488 00:15:05.494 } 00:15:05.494 ] 00:15:05.494 }' 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 [2024-11-19 10:26:19.115913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.494 [2024-11-19 10:26:19.180039] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.494 [2024-11-19 10:26:19.180092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:15:05.494 [2024-11-19 10:26:19.180109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.494 [2024-11-19 10:26:19.180116] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.494 10:26:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.494 "name": "raid_bdev1", 00:15:05.494 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:05.494 "strip_size_kb": 64, 00:15:05.494 "state": "online", 00:15:05.494 "raid_level": "raid5f", 00:15:05.494 "superblock": true, 00:15:05.494 "num_base_bdevs": 3, 00:15:05.494 "num_base_bdevs_discovered": 2, 00:15:05.494 "num_base_bdevs_operational": 2, 00:15:05.494 "base_bdevs_list": [ 00:15:05.494 { 00:15:05.494 "name": null, 00:15:05.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.494 "is_configured": false, 00:15:05.494 "data_offset": 0, 00:15:05.494 "data_size": 63488 00:15:05.494 }, 00:15:05.494 { 00:15:05.494 "name": "BaseBdev2", 00:15:05.494 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:05.494 "is_configured": true, 00:15:05.494 "data_offset": 2048, 00:15:05.494 "data_size": 63488 00:15:05.494 }, 00:15:05.494 { 00:15:05.494 "name": "BaseBdev3", 00:15:05.494 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:05.494 "is_configured": true, 00:15:05.494 "data_offset": 2048, 00:15:05.494 "data_size": 63488 00:15:05.494 } 00:15:05.494 ] 00:15:05.494 }' 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.494 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.064 "name": "raid_bdev1", 00:15:06.064 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:06.064 "strip_size_kb": 64, 00:15:06.064 "state": "online", 00:15:06.064 "raid_level": "raid5f", 00:15:06.064 "superblock": true, 00:15:06.064 "num_base_bdevs": 3, 00:15:06.064 "num_base_bdevs_discovered": 2, 00:15:06.064 "num_base_bdevs_operational": 2, 00:15:06.064 "base_bdevs_list": [ 00:15:06.064 { 00:15:06.064 "name": null, 00:15:06.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.064 "is_configured": false, 00:15:06.064 "data_offset": 0, 00:15:06.064 "data_size": 63488 00:15:06.064 }, 00:15:06.064 { 00:15:06.064 "name": "BaseBdev2", 00:15:06.064 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:06.064 "is_configured": true, 00:15:06.064 "data_offset": 2048, 00:15:06.064 "data_size": 63488 00:15:06.064 }, 00:15:06.064 { 00:15:06.064 "name": "BaseBdev3", 00:15:06.064 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:06.064 "is_configured": true, 00:15:06.064 "data_offset": 2048, 00:15:06.064 "data_size": 63488 00:15:06.064 } 00:15:06.064 ] 00:15:06.064 }' 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.064 [2024-11-19 10:26:19.816864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.064 [2024-11-19 10:26:19.831758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.064 10:26:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.064 [2024-11-19 10:26:19.838570] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.446 "name": "raid_bdev1", 00:15:07.446 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:07.446 "strip_size_kb": 64, 00:15:07.446 "state": "online", 00:15:07.446 "raid_level": "raid5f", 00:15:07.446 "superblock": true, 00:15:07.446 "num_base_bdevs": 3, 00:15:07.446 "num_base_bdevs_discovered": 3, 00:15:07.446 "num_base_bdevs_operational": 3, 00:15:07.446 "process": { 00:15:07.446 "type": "rebuild", 00:15:07.446 "target": "spare", 00:15:07.446 "progress": { 00:15:07.446 "blocks": 20480, 00:15:07.446 "percent": 16 00:15:07.446 } 00:15:07.446 }, 00:15:07.446 "base_bdevs_list": [ 00:15:07.446 { 00:15:07.446 "name": "spare", 00:15:07.446 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:07.446 "is_configured": true, 00:15:07.446 "data_offset": 2048, 00:15:07.446 "data_size": 63488 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "name": "BaseBdev2", 00:15:07.446 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:07.446 "is_configured": true, 00:15:07.446 "data_offset": 2048, 00:15:07.446 "data_size": 63488 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "name": "BaseBdev3", 00:15:07.446 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:07.446 "is_configured": true, 00:15:07.446 "data_offset": 2048, 00:15:07.446 "data_size": 63488 00:15:07.446 } 00:15:07.446 ] 00:15:07.446 }' 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:07.446 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=545 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.446 10:26:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.446 "name": "raid_bdev1", 00:15:07.446 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:07.446 "strip_size_kb": 64, 00:15:07.446 "state": "online", 00:15:07.446 "raid_level": "raid5f", 00:15:07.446 "superblock": true, 00:15:07.446 "num_base_bdevs": 3, 00:15:07.446 "num_base_bdevs_discovered": 3, 00:15:07.446 "num_base_bdevs_operational": 3, 00:15:07.446 "process": { 00:15:07.446 "type": "rebuild", 00:15:07.446 "target": "spare", 00:15:07.446 "progress": { 00:15:07.446 "blocks": 22528, 00:15:07.446 "percent": 17 00:15:07.446 } 00:15:07.446 }, 00:15:07.446 "base_bdevs_list": [ 00:15:07.446 { 00:15:07.446 "name": "spare", 00:15:07.446 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:07.446 "is_configured": true, 00:15:07.446 "data_offset": 2048, 00:15:07.446 "data_size": 63488 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "name": "BaseBdev2", 00:15:07.446 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:07.446 "is_configured": true, 00:15:07.446 "data_offset": 2048, 00:15:07.446 "data_size": 63488 00:15:07.446 }, 00:15:07.446 { 00:15:07.446 "name": "BaseBdev3", 00:15:07.446 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:07.446 "is_configured": true, 00:15:07.446 "data_offset": 2048, 00:15:07.446 "data_size": 63488 00:15:07.446 } 00:15:07.446 ] 00:15:07.446 }' 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:15:07.446 10:26:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.384 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.644 "name": "raid_bdev1", 00:15:08.644 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:08.644 "strip_size_kb": 64, 00:15:08.644 "state": "online", 00:15:08.644 "raid_level": "raid5f", 00:15:08.644 "superblock": true, 00:15:08.644 "num_base_bdevs": 3, 00:15:08.644 "num_base_bdevs_discovered": 3, 00:15:08.644 "num_base_bdevs_operational": 3, 00:15:08.644 "process": { 00:15:08.644 "type": "rebuild", 00:15:08.644 "target": "spare", 00:15:08.644 "progress": { 00:15:08.644 "blocks": 47104, 00:15:08.644 "percent": 37 00:15:08.644 } 00:15:08.644 }, 
00:15:08.644 "base_bdevs_list": [ 00:15:08.644 { 00:15:08.644 "name": "spare", 00:15:08.644 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:08.644 "is_configured": true, 00:15:08.644 "data_offset": 2048, 00:15:08.644 "data_size": 63488 00:15:08.644 }, 00:15:08.644 { 00:15:08.644 "name": "BaseBdev2", 00:15:08.644 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:08.644 "is_configured": true, 00:15:08.644 "data_offset": 2048, 00:15:08.644 "data_size": 63488 00:15:08.644 }, 00:15:08.644 { 00:15:08.644 "name": "BaseBdev3", 00:15:08.644 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:08.644 "is_configured": true, 00:15:08.644 "data_offset": 2048, 00:15:08.644 "data_size": 63488 00:15:08.644 } 00:15:08.644 ] 00:15:08.644 }' 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.644 10:26:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.584 
10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.584 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:09.584 "name": "raid_bdev1", 00:15:09.584 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:09.584 "strip_size_kb": 64, 00:15:09.584 "state": "online", 00:15:09.584 "raid_level": "raid5f", 00:15:09.584 "superblock": true, 00:15:09.584 "num_base_bdevs": 3, 00:15:09.584 "num_base_bdevs_discovered": 3, 00:15:09.584 "num_base_bdevs_operational": 3, 00:15:09.584 "process": { 00:15:09.584 "type": "rebuild", 00:15:09.584 "target": "spare", 00:15:09.584 "progress": { 00:15:09.584 "blocks": 69632, 00:15:09.584 "percent": 54 00:15:09.584 } 00:15:09.584 }, 00:15:09.584 "base_bdevs_list": [ 00:15:09.584 { 00:15:09.584 "name": "spare", 00:15:09.584 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:09.584 "is_configured": true, 00:15:09.584 "data_offset": 2048, 00:15:09.584 "data_size": 63488 00:15:09.584 }, 00:15:09.584 { 00:15:09.584 "name": "BaseBdev2", 00:15:09.584 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:09.584 "is_configured": true, 00:15:09.584 "data_offset": 2048, 00:15:09.584 "data_size": 63488 00:15:09.584 }, 00:15:09.584 { 00:15:09.584 "name": "BaseBdev3", 00:15:09.584 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:09.584 "is_configured": true, 00:15:09.584 "data_offset": 2048, 00:15:09.584 "data_size": 63488 00:15:09.584 } 00:15:09.584 ] 00:15:09.584 }' 00:15:09.584 10:26:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.843 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.844 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.844 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.844 10:26:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.782 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.782 "name": "raid_bdev1", 00:15:10.782 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:10.782 
"strip_size_kb": 64, 00:15:10.782 "state": "online", 00:15:10.782 "raid_level": "raid5f", 00:15:10.782 "superblock": true, 00:15:10.782 "num_base_bdevs": 3, 00:15:10.782 "num_base_bdevs_discovered": 3, 00:15:10.782 "num_base_bdevs_operational": 3, 00:15:10.782 "process": { 00:15:10.782 "type": "rebuild", 00:15:10.782 "target": "spare", 00:15:10.782 "progress": { 00:15:10.782 "blocks": 92160, 00:15:10.782 "percent": 72 00:15:10.782 } 00:15:10.782 }, 00:15:10.782 "base_bdevs_list": [ 00:15:10.782 { 00:15:10.782 "name": "spare", 00:15:10.783 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:10.783 "is_configured": true, 00:15:10.783 "data_offset": 2048, 00:15:10.783 "data_size": 63488 00:15:10.783 }, 00:15:10.783 { 00:15:10.783 "name": "BaseBdev2", 00:15:10.783 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:10.783 "is_configured": true, 00:15:10.783 "data_offset": 2048, 00:15:10.783 "data_size": 63488 00:15:10.783 }, 00:15:10.783 { 00:15:10.783 "name": "BaseBdev3", 00:15:10.783 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:10.783 "is_configured": true, 00:15:10.783 "data_offset": 2048, 00:15:10.783 "data_size": 63488 00:15:10.783 } 00:15:10.783 ] 00:15:10.783 }' 00:15:10.783 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.783 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.783 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.042 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.042 10:26:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.982 "name": "raid_bdev1", 00:15:11.982 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:11.982 "strip_size_kb": 64, 00:15:11.982 "state": "online", 00:15:11.982 "raid_level": "raid5f", 00:15:11.982 "superblock": true, 00:15:11.982 "num_base_bdevs": 3, 00:15:11.982 "num_base_bdevs_discovered": 3, 00:15:11.982 "num_base_bdevs_operational": 3, 00:15:11.982 "process": { 00:15:11.982 "type": "rebuild", 00:15:11.982 "target": "spare", 00:15:11.982 "progress": { 00:15:11.982 "blocks": 116736, 00:15:11.982 "percent": 91 00:15:11.982 } 00:15:11.982 }, 00:15:11.982 "base_bdevs_list": [ 00:15:11.982 { 00:15:11.982 "name": "spare", 00:15:11.982 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:11.982 "is_configured": true, 00:15:11.982 "data_offset": 2048, 00:15:11.982 "data_size": 63488 00:15:11.982 }, 00:15:11.982 { 00:15:11.982 "name": "BaseBdev2", 00:15:11.982 "uuid": 
"36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:11.982 "is_configured": true, 00:15:11.982 "data_offset": 2048, 00:15:11.982 "data_size": 63488 00:15:11.982 }, 00:15:11.982 { 00:15:11.982 "name": "BaseBdev3", 00:15:11.982 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:11.982 "is_configured": true, 00:15:11.982 "data_offset": 2048, 00:15:11.982 "data_size": 63488 00:15:11.982 } 00:15:11.982 ] 00:15:11.982 }' 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.982 10:26:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:12.551 [2024-11-19 10:26:26.072577] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:12.551 [2024-11-19 10:26:26.072699] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:12.551 [2024-11-19 10:26:26.072836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.119 "name": "raid_bdev1", 00:15:13.119 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:13.119 "strip_size_kb": 64, 00:15:13.119 "state": "online", 00:15:13.119 "raid_level": "raid5f", 00:15:13.119 "superblock": true, 00:15:13.119 "num_base_bdevs": 3, 00:15:13.119 "num_base_bdevs_discovered": 3, 00:15:13.119 "num_base_bdevs_operational": 3, 00:15:13.119 "base_bdevs_list": [ 00:15:13.119 { 00:15:13.119 "name": "spare", 00:15:13.119 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:13.119 "is_configured": true, 00:15:13.119 "data_offset": 2048, 00:15:13.119 "data_size": 63488 00:15:13.119 }, 00:15:13.119 { 00:15:13.119 "name": "BaseBdev2", 00:15:13.119 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:13.119 "is_configured": true, 00:15:13.119 "data_offset": 2048, 00:15:13.119 "data_size": 63488 00:15:13.119 }, 00:15:13.119 { 00:15:13.119 "name": "BaseBdev3", 00:15:13.119 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:13.119 "is_configured": true, 00:15:13.119 "data_offset": 2048, 00:15:13.119 "data_size": 63488 00:15:13.119 } 00:15:13.119 ] 00:15:13.119 }' 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.119 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.120 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.379 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:13.379 "name": "raid_bdev1", 00:15:13.379 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:13.379 "strip_size_kb": 64, 00:15:13.379 "state": "online", 00:15:13.379 "raid_level": "raid5f", 00:15:13.379 "superblock": true, 00:15:13.379 "num_base_bdevs": 3, 00:15:13.379 "num_base_bdevs_discovered": 3, 00:15:13.379 "num_base_bdevs_operational": 3, 00:15:13.379 "base_bdevs_list": [ 
00:15:13.379 { 00:15:13.379 "name": "spare", 00:15:13.379 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:13.379 "is_configured": true, 00:15:13.379 "data_offset": 2048, 00:15:13.379 "data_size": 63488 00:15:13.379 }, 00:15:13.379 { 00:15:13.379 "name": "BaseBdev2", 00:15:13.379 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:13.379 "is_configured": true, 00:15:13.379 "data_offset": 2048, 00:15:13.379 "data_size": 63488 00:15:13.379 }, 00:15:13.379 { 00:15:13.379 "name": "BaseBdev3", 00:15:13.379 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:13.379 "is_configured": true, 00:15:13.379 "data_offset": 2048, 00:15:13.379 "data_size": 63488 00:15:13.379 } 00:15:13.379 ] 00:15:13.379 }' 00:15:13.379 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:13.379 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:13.379 10:26:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.379 10:26:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.379 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.379 "name": "raid_bdev1", 00:15:13.379 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:13.379 "strip_size_kb": 64, 00:15:13.379 "state": "online", 00:15:13.379 "raid_level": "raid5f", 00:15:13.379 "superblock": true, 00:15:13.379 "num_base_bdevs": 3, 00:15:13.379 "num_base_bdevs_discovered": 3, 00:15:13.379 "num_base_bdevs_operational": 3, 00:15:13.380 "base_bdevs_list": [ 00:15:13.380 { 00:15:13.380 "name": "spare", 00:15:13.380 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:13.380 "is_configured": true, 00:15:13.380 "data_offset": 2048, 00:15:13.380 "data_size": 63488 00:15:13.380 }, 00:15:13.380 { 00:15:13.380 "name": "BaseBdev2", 00:15:13.380 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:13.380 "is_configured": true, 00:15:13.380 "data_offset": 2048, 00:15:13.380 "data_size": 63488 00:15:13.380 }, 00:15:13.380 { 00:15:13.380 "name": "BaseBdev3", 00:15:13.380 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:13.380 "is_configured": true, 00:15:13.380 "data_offset": 2048, 00:15:13.380 
"data_size": 63488 00:15:13.380 } 00:15:13.380 ] 00:15:13.380 }' 00:15:13.380 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.380 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.947 [2024-11-19 10:26:27.451736] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.947 [2024-11-19 10:26:27.451807] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.947 [2024-11-19 10:26:27.451905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.947 [2024-11-19 10:26:27.452024] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.947 [2024-11-19 10:26:27.452105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 
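The iterations above all follow one polling pattern: while `SECONDS < timeout`, fetch the raid bdev via `bdev_raid_get_bdevs`, extract `.process.type` and `.process.target` with jq fallbacks, and keep sleeping while a rebuild targeting the spare is in flight; once `.process` disappears from the JSON, both jq expressions yield `"none"` and the loop breaks. A condensed sketch of that loop is below — the `rpc_cmd` wrapper path, timeout default, and bdev name are illustrative assumptions, not the harness's exact code:

```shell
#!/usr/bin/env bash
# Hypothetical condensation of the rebuild-wait loop seen in the log
# (the real version lives in test/bdev/bdev_raid.sh). The rpc.py path
# and the 60 s default timeout are assumptions for illustration.
rpc_cmd() { ./scripts/rpc.py "$@"; }   # assumed RPC client wrapper

wait_for_rebuild() {
    local raid_bdev_name=$1 timeout=${2:-60} info ptype ptarget
    while (( SECONDS < timeout )); do
        info=$(rpc_cmd bdev_raid_get_bdevs all |
               jq -r ".[] | select(.name == \"$raid_bdev_name\")")
        # '// "none"' substitutes "none" once the .process object is
        # gone, i.e. once the rebuild has completed.
        ptype=$(jq -r '.process.type // "none"' <<< "$info")
        ptarget=$(jq -r '.process.target // "none"' <<< "$info")
        [[ $ptype == rebuild && $ptarget == spare ]] || return 0
        sleep 1
    done
    return 1   # rebuild still running at timeout
}
```

With no `process` key present (the post-rebuild state shown in the log), the function returns immediately with success.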
00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:13.947 /dev/nbd0 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.947 10:26:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.947 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:14.207 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.208 1+0 records in 00:15:14.208 1+0 records out 00:15:14.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486357 s, 8.4 MB/s 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.208 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:14.208 /dev/nbd1 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.466 10:26:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.466 10:26:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.466 1+0 records in 00:15:14.466 1+0 records out 00:15:14.466 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000623413 s, 6.6 MB/s 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.466 10:26:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:14.466 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.467 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.726 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.726 
10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.990 [2024-11-19 10:26:28.649142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.990 
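The nbd section above performs the actual data-integrity check: export `BaseBdev1` and the rebuilt `spare` as `/dev/nbd0` and `/dev/nbd1`, wait for each device to appear in `/proc/partitions` and pass a direct-I/O smoke read, then `cmp -i 1048576` the two block devices — the 1 MiB skip jumps past the per-bdev metadata region so only user data is compared. A sketch of that flow follows; the rpc.py path and the function names are assumptions (the harness's own helpers live in `nbd_common.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the nbd-backed comparison in the log. Needs a
# Linux host with the nbd module and a running SPDK target; rpc client
# path is an assumption. Nothing here runs at source time — call
# verify_base_bdev_data explicitly.
rpc=./scripts/rpc.py   # assumed location

waitfornbd() {
    # Poll until the kernel exposes the device node, mirroring the
    # grep -q -w <name> /proc/partitions loop in the log.
    local nbd=$1 i
    for (( i = 1; i <= 20; i++ )); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    # One direct 4 KiB read as a smoke test that I/O works at all.
    dd if=/dev/$nbd of=/dev/null bs=4096 count=1 iflag=direct
}

verify_base_bdev_data() {
    $rpc nbd_start_disk BaseBdev1 /dev/nbd0 && waitfornbd nbd0
    $rpc nbd_start_disk spare     /dev/nbd1 && waitfornbd nbd1
    # -i N skips the first N bytes of BOTH files; exit 0 means the
    # data regions are identical, i.e. the rebuild reproduced the data.
    cmp -i 1048576 /dev/nbd0 /dev/nbd1
    local rc=$?
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1
    return $rc
}
```

The `cmp -i` offset is what lets a superblock-enabled (`_sb`) test compare rebuilt data without tripping over the differing metadata headers at the start of each base bdev.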
[2024-11-19 10:26:28.649256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.990 [2024-11-19 10:26:28.649279] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:14.990 [2024-11-19 10:26:28.649290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.990 [2024-11-19 10:26:28.651563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.990 [2024-11-19 10:26:28.651606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.990 [2024-11-19 10:26:28.651696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.990 [2024-11-19 10:26:28.651754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.990 [2024-11-19 10:26:28.651908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.990 [2024-11-19 10:26:28.652018] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:14.990 spare 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.990 [2024-11-19 10:26:28.751904] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:14.990 [2024-11-19 10:26:28.751971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.990 [2024-11-19 10:26:28.752277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:14.990 [2024-11-19 10:26:28.757515] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:14.990 [2024-11-19 10:26:28.757568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:14.990 [2024-11-19 10:26:28.757769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.990 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.260 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.260 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.260 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.260 10:26:28 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:15.260 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.260 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.260 "name": "raid_bdev1", 00:15:15.260 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:15.260 "strip_size_kb": 64, 00:15:15.260 "state": "online", 00:15:15.260 "raid_level": "raid5f", 00:15:15.260 "superblock": true, 00:15:15.260 "num_base_bdevs": 3, 00:15:15.260 "num_base_bdevs_discovered": 3, 00:15:15.260 "num_base_bdevs_operational": 3, 00:15:15.260 "base_bdevs_list": [ 00:15:15.260 { 00:15:15.260 "name": "spare", 00:15:15.260 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:15.260 "is_configured": true, 00:15:15.261 "data_offset": 2048, 00:15:15.261 "data_size": 63488 00:15:15.261 }, 00:15:15.261 { 00:15:15.261 "name": "BaseBdev2", 00:15:15.261 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:15.261 "is_configured": true, 00:15:15.261 "data_offset": 2048, 00:15:15.261 "data_size": 63488 00:15:15.261 }, 00:15:15.261 { 00:15:15.261 "name": "BaseBdev3", 00:15:15.261 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:15.261 "is_configured": true, 00:15:15.261 "data_offset": 2048, 00:15:15.261 "data_size": 63488 00:15:15.261 } 00:15:15.261 ] 00:15:15.261 }' 00:15:15.261 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.261 10:26:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # 
local target=none 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.533 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.533 "name": "raid_bdev1", 00:15:15.533 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:15.533 "strip_size_kb": 64, 00:15:15.533 "state": "online", 00:15:15.533 "raid_level": "raid5f", 00:15:15.534 "superblock": true, 00:15:15.534 "num_base_bdevs": 3, 00:15:15.534 "num_base_bdevs_discovered": 3, 00:15:15.534 "num_base_bdevs_operational": 3, 00:15:15.534 "base_bdevs_list": [ 00:15:15.534 { 00:15:15.534 "name": "spare", 00:15:15.534 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:15.534 "is_configured": true, 00:15:15.534 "data_offset": 2048, 00:15:15.534 "data_size": 63488 00:15:15.534 }, 00:15:15.534 { 00:15:15.534 "name": "BaseBdev2", 00:15:15.534 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:15.534 "is_configured": true, 00:15:15.534 "data_offset": 2048, 00:15:15.534 "data_size": 63488 00:15:15.534 }, 00:15:15.534 { 00:15:15.534 "name": "BaseBdev3", 00:15:15.534 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:15.534 "is_configured": true, 00:15:15.534 "data_offset": 2048, 00:15:15.534 "data_size": 63488 00:15:15.534 } 00:15:15.534 ] 00:15:15.534 }' 00:15:15.534 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:15.534 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:15.534 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.798 [2024-11-19 10:26:29.406810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid5f 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.798 "name": "raid_bdev1", 00:15:15.798 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:15.798 "strip_size_kb": 64, 00:15:15.798 "state": "online", 00:15:15.798 "raid_level": "raid5f", 00:15:15.798 "superblock": true, 00:15:15.798 "num_base_bdevs": 3, 00:15:15.798 "num_base_bdevs_discovered": 2, 00:15:15.798 "num_base_bdevs_operational": 2, 00:15:15.798 "base_bdevs_list": [ 00:15:15.798 { 00:15:15.798 "name": null, 00:15:15.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.798 "is_configured": false, 00:15:15.798 "data_offset": 0, 00:15:15.798 "data_size": 63488 00:15:15.798 }, 00:15:15.798 { 00:15:15.798 "name": "BaseBdev2", 
00:15:15.798 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:15.798 "is_configured": true, 00:15:15.798 "data_offset": 2048, 00:15:15.798 "data_size": 63488 00:15:15.798 }, 00:15:15.798 { 00:15:15.798 "name": "BaseBdev3", 00:15:15.798 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:15.798 "is_configured": true, 00:15:15.798 "data_offset": 2048, 00:15:15.798 "data_size": 63488 00:15:15.798 } 00:15:15.798 ] 00:15:15.798 }' 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.798 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.366 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.366 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.366 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:16.366 [2024-11-19 10:26:29.850071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.366 [2024-11-19 10:26:29.850295] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.366 [2024-11-19 10:26:29.850359] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:16.366 [2024-11-19 10:26:29.850420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.366 [2024-11-19 10:26:29.865228] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:16.366 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.366 10:26:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:16.366 [2024-11-19 10:26:29.872488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.305 "name": "raid_bdev1", 00:15:17.305 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:17.305 "strip_size_kb": 64, 00:15:17.305 "state": "online", 00:15:17.305 
"raid_level": "raid5f", 00:15:17.305 "superblock": true, 00:15:17.305 "num_base_bdevs": 3, 00:15:17.305 "num_base_bdevs_discovered": 3, 00:15:17.305 "num_base_bdevs_operational": 3, 00:15:17.305 "process": { 00:15:17.305 "type": "rebuild", 00:15:17.305 "target": "spare", 00:15:17.305 "progress": { 00:15:17.305 "blocks": 20480, 00:15:17.305 "percent": 16 00:15:17.305 } 00:15:17.305 }, 00:15:17.305 "base_bdevs_list": [ 00:15:17.305 { 00:15:17.305 "name": "spare", 00:15:17.305 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:17.305 "is_configured": true, 00:15:17.305 "data_offset": 2048, 00:15:17.305 "data_size": 63488 00:15:17.305 }, 00:15:17.305 { 00:15:17.305 "name": "BaseBdev2", 00:15:17.305 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:17.305 "is_configured": true, 00:15:17.305 "data_offset": 2048, 00:15:17.305 "data_size": 63488 00:15:17.305 }, 00:15:17.305 { 00:15:17.305 "name": "BaseBdev3", 00:15:17.305 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:17.305 "is_configured": true, 00:15:17.305 "data_offset": 2048, 00:15:17.305 "data_size": 63488 00:15:17.305 } 00:15:17.305 ] 00:15:17.305 }' 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.305 10:26:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.305 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.305 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.305 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.305 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.305 [2024-11-19 10:26:31.027556] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.305 [2024-11-19 10:26:31.079596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.305 [2024-11-19 10:26:31.079718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.305 [2024-11-19 10:26:31.079755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.305 [2024-11-19 10:26:31.079778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.565 10:26:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.565 "name": "raid_bdev1", 00:15:17.565 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:17.565 "strip_size_kb": 64, 00:15:17.565 "state": "online", 00:15:17.565 "raid_level": "raid5f", 00:15:17.565 "superblock": true, 00:15:17.565 "num_base_bdevs": 3, 00:15:17.565 "num_base_bdevs_discovered": 2, 00:15:17.565 "num_base_bdevs_operational": 2, 00:15:17.565 "base_bdevs_list": [ 00:15:17.565 { 00:15:17.565 "name": null, 00:15:17.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.565 "is_configured": false, 00:15:17.565 "data_offset": 0, 00:15:17.565 "data_size": 63488 00:15:17.565 }, 00:15:17.565 { 00:15:17.565 "name": "BaseBdev2", 00:15:17.565 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:17.565 "is_configured": true, 00:15:17.565 "data_offset": 2048, 00:15:17.565 "data_size": 63488 00:15:17.565 }, 00:15:17.565 { 00:15:17.565 "name": "BaseBdev3", 00:15:17.565 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:17.565 "is_configured": true, 00:15:17.565 "data_offset": 2048, 00:15:17.565 "data_size": 63488 00:15:17.565 } 00:15:17.565 ] 00:15:17.565 }' 00:15:17.565 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.566 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.826 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:17.826 10:26:31 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.826 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:17.826 [2024-11-19 10:26:31.575849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:17.826 [2024-11-19 10:26:31.575907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.826 [2024-11-19 10:26:31.575925] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:17.826 [2024-11-19 10:26:31.575938] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.826 [2024-11-19 10:26:31.576389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.826 [2024-11-19 10:26:31.576411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:17.826 [2024-11-19 10:26:31.576493] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:17.826 [2024-11-19 10:26:31.576506] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:17.826 [2024-11-19 10:26:31.576515] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:17.826 [2024-11-19 10:26:31.576536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:17.826 [2024-11-19 10:26:31.591617] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:17.826 spare 00:15:17.826 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.826 10:26:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:17.826 [2024-11-19 10:26:31.598701] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.209 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.209 "name": "raid_bdev1", 00:15:19.209 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:19.209 "strip_size_kb": 64, 00:15:19.209 "state": 
"online", 00:15:19.209 "raid_level": "raid5f", 00:15:19.209 "superblock": true, 00:15:19.209 "num_base_bdevs": 3, 00:15:19.209 "num_base_bdevs_discovered": 3, 00:15:19.209 "num_base_bdevs_operational": 3, 00:15:19.209 "process": { 00:15:19.209 "type": "rebuild", 00:15:19.209 "target": "spare", 00:15:19.209 "progress": { 00:15:19.209 "blocks": 20480, 00:15:19.209 "percent": 16 00:15:19.209 } 00:15:19.209 }, 00:15:19.209 "base_bdevs_list": [ 00:15:19.209 { 00:15:19.209 "name": "spare", 00:15:19.209 "uuid": "6d1b67ab-96ee-5957-87c1-d592944033da", 00:15:19.209 "is_configured": true, 00:15:19.209 "data_offset": 2048, 00:15:19.209 "data_size": 63488 00:15:19.209 }, 00:15:19.209 { 00:15:19.210 "name": "BaseBdev2", 00:15:19.210 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:19.210 "is_configured": true, 00:15:19.210 "data_offset": 2048, 00:15:19.210 "data_size": 63488 00:15:19.210 }, 00:15:19.210 { 00:15:19.210 "name": "BaseBdev3", 00:15:19.210 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:19.210 "is_configured": true, 00:15:19.210 "data_offset": 2048, 00:15:19.210 "data_size": 63488 00:15:19.210 } 00:15:19.210 ] 00:15:19.210 }' 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.210 [2024-11-19 10:26:32.753804] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.210 [2024-11-19 10:26:32.805878] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.210 [2024-11-19 10:26:32.805999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.210 [2024-11-19 10:26:32.806040] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.210 [2024-11-19 10:26:32.806062] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.210 "name": "raid_bdev1", 00:15:19.210 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:19.210 "strip_size_kb": 64, 00:15:19.210 "state": "online", 00:15:19.210 "raid_level": "raid5f", 00:15:19.210 "superblock": true, 00:15:19.210 "num_base_bdevs": 3, 00:15:19.210 "num_base_bdevs_discovered": 2, 00:15:19.210 "num_base_bdevs_operational": 2, 00:15:19.210 "base_bdevs_list": [ 00:15:19.210 { 00:15:19.210 "name": null, 00:15:19.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.210 "is_configured": false, 00:15:19.210 "data_offset": 0, 00:15:19.210 "data_size": 63488 00:15:19.210 }, 00:15:19.210 { 00:15:19.210 "name": "BaseBdev2", 00:15:19.210 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:19.210 "is_configured": true, 00:15:19.210 "data_offset": 2048, 00:15:19.210 "data_size": 63488 00:15:19.210 }, 00:15:19.210 { 00:15:19.210 "name": "BaseBdev3", 00:15:19.210 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:19.210 "is_configured": true, 00:15:19.210 "data_offset": 2048, 00:15:19.210 "data_size": 63488 00:15:19.210 } 00:15:19.210 ] 00:15:19.210 }' 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.210 10:26:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.470 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.730 "name": "raid_bdev1", 00:15:19.730 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:19.730 "strip_size_kb": 64, 00:15:19.730 "state": "online", 00:15:19.730 "raid_level": "raid5f", 00:15:19.730 "superblock": true, 00:15:19.730 "num_base_bdevs": 3, 00:15:19.730 "num_base_bdevs_discovered": 2, 00:15:19.730 "num_base_bdevs_operational": 2, 00:15:19.730 "base_bdevs_list": [ 00:15:19.730 { 00:15:19.730 "name": null, 00:15:19.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.730 "is_configured": false, 00:15:19.730 "data_offset": 0, 00:15:19.730 "data_size": 63488 00:15:19.730 }, 00:15:19.730 { 00:15:19.730 "name": "BaseBdev2", 00:15:19.730 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:19.730 "is_configured": true, 00:15:19.730 "data_offset": 2048, 00:15:19.730 "data_size": 63488 00:15:19.730 }, 00:15:19.730 { 00:15:19.730 "name": "BaseBdev3", 00:15:19.730 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:19.730 "is_configured": true, 
00:15:19.730 "data_offset": 2048, 00:15:19.730 "data_size": 63488 00:15:19.730 } 00:15:19.730 ] 00:15:19.730 }' 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.730 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:19.730 [2024-11-19 10:26:33.390544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:19.730 [2024-11-19 10:26:33.390639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.730 [2024-11-19 10:26:33.390695] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:19.730 [2024-11-19 10:26:33.390723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.730 [2024-11-19 10:26:33.391193] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.730 [2024-11-19 
10:26:33.391249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:19.730 [2024-11-19 10:26:33.391366] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:19.730 [2024-11-19 10:26:33.391409] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:19.731 [2024-11-19 10:26:33.391465] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:19.731 [2024-11-19 10:26:33.391519] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:19.731 BaseBdev1 00:15:19.731 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.731 10:26:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.670 10:26:34 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:20.670 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.931 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.931 "name": "raid_bdev1", 00:15:20.931 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:20.931 "strip_size_kb": 64, 00:15:20.931 "state": "online", 00:15:20.931 "raid_level": "raid5f", 00:15:20.931 "superblock": true, 00:15:20.931 "num_base_bdevs": 3, 00:15:20.931 "num_base_bdevs_discovered": 2, 00:15:20.931 "num_base_bdevs_operational": 2, 00:15:20.931 "base_bdevs_list": [ 00:15:20.931 { 00:15:20.931 "name": null, 00:15:20.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.931 "is_configured": false, 00:15:20.931 "data_offset": 0, 00:15:20.931 "data_size": 63488 00:15:20.931 }, 00:15:20.931 { 00:15:20.931 "name": "BaseBdev2", 00:15:20.931 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:20.931 "is_configured": true, 00:15:20.931 "data_offset": 2048, 00:15:20.931 "data_size": 63488 00:15:20.931 }, 00:15:20.931 { 00:15:20.931 "name": "BaseBdev3", 00:15:20.931 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:20.931 "is_configured": true, 00:15:20.931 "data_offset": 2048, 00:15:20.931 "data_size": 63488 00:15:20.931 } 00:15:20.931 ] 00:15:20.931 }' 00:15:20.931 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.931 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.191 "name": "raid_bdev1", 00:15:21.191 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:21.191 "strip_size_kb": 64, 00:15:21.191 "state": "online", 00:15:21.191 "raid_level": "raid5f", 00:15:21.191 "superblock": true, 00:15:21.191 "num_base_bdevs": 3, 00:15:21.191 "num_base_bdevs_discovered": 2, 00:15:21.191 "num_base_bdevs_operational": 2, 00:15:21.191 "base_bdevs_list": [ 00:15:21.191 { 00:15:21.191 "name": null, 00:15:21.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.191 "is_configured": false, 00:15:21.191 "data_offset": 0, 00:15:21.191 "data_size": 63488 00:15:21.191 }, 00:15:21.191 { 00:15:21.191 "name": "BaseBdev2", 00:15:21.191 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 
00:15:21.191 "is_configured": true, 00:15:21.191 "data_offset": 2048, 00:15:21.191 "data_size": 63488 00:15:21.191 }, 00:15:21.191 { 00:15:21.191 "name": "BaseBdev3", 00:15:21.191 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:21.191 "is_configured": true, 00:15:21.191 "data_offset": 2048, 00:15:21.191 "data_size": 63488 00:15:21.191 } 00:15:21.191 ] 00:15:21.191 }' 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.191 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.451 10:26:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:21.451 [2024-11-19 10:26:34.987819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:21.451 [2024-11-19 10:26:34.988028] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:21.451 [2024-11-19 10:26:34.988095] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:21.451 request: 00:15:21.451 { 00:15:21.451 "base_bdev": "BaseBdev1", 00:15:21.451 "raid_bdev": "raid_bdev1", 00:15:21.451 "method": "bdev_raid_add_base_bdev", 00:15:21.451 "req_id": 1 00:15:21.451 } 00:15:21.451 Got JSON-RPC error response 00:15:21.451 response: 00:15:21.451 { 00:15:21.451 "code": -22, 00:15:21.451 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:21.451 } 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:21.451 10:26:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:22.390 10:26:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.390 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.390 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.390 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.390 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.390 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.391 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.391 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.391 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.391 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.391 "name": "raid_bdev1", 00:15:22.391 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:22.391 "strip_size_kb": 64, 00:15:22.391 "state": "online", 00:15:22.391 "raid_level": "raid5f", 00:15:22.391 "superblock": true, 00:15:22.391 "num_base_bdevs": 3, 00:15:22.391 "num_base_bdevs_discovered": 2, 00:15:22.391 "num_base_bdevs_operational": 2, 00:15:22.391 "base_bdevs_list": [ 00:15:22.391 { 00:15:22.391 "name": null, 00:15:22.391 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.391 "is_configured": false, 00:15:22.391 "data_offset": 0, 00:15:22.391 "data_size": 63488 00:15:22.391 }, 00:15:22.391 { 00:15:22.391 
"name": "BaseBdev2", 00:15:22.391 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:22.391 "is_configured": true, 00:15:22.391 "data_offset": 2048, 00:15:22.391 "data_size": 63488 00:15:22.391 }, 00:15:22.391 { 00:15:22.391 "name": "BaseBdev3", 00:15:22.391 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:22.391 "is_configured": true, 00:15:22.391 "data_offset": 2048, 00:15:22.391 "data_size": 63488 00:15:22.391 } 00:15:22.391 ] 00:15:22.391 }' 00:15:22.391 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.391 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.650 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:22.910 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.910 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.910 "name": "raid_bdev1", 00:15:22.910 "uuid": "7dad1f70-0144-40ba-b10c-6b1e22b0323d", 00:15:22.910 
"strip_size_kb": 64, 00:15:22.910 "state": "online", 00:15:22.910 "raid_level": "raid5f", 00:15:22.910 "superblock": true, 00:15:22.910 "num_base_bdevs": 3, 00:15:22.910 "num_base_bdevs_discovered": 2, 00:15:22.910 "num_base_bdevs_operational": 2, 00:15:22.910 "base_bdevs_list": [ 00:15:22.910 { 00:15:22.910 "name": null, 00:15:22.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.910 "is_configured": false, 00:15:22.910 "data_offset": 0, 00:15:22.910 "data_size": 63488 00:15:22.910 }, 00:15:22.910 { 00:15:22.910 "name": "BaseBdev2", 00:15:22.910 "uuid": "36f526a9-b5f0-54cf-a373-d46dd04803d1", 00:15:22.910 "is_configured": true, 00:15:22.910 "data_offset": 2048, 00:15:22.910 "data_size": 63488 00:15:22.910 }, 00:15:22.910 { 00:15:22.910 "name": "BaseBdev3", 00:15:22.911 "uuid": "1b357650-3fe4-52be-a8a6-7d0892145c9e", 00:15:22.911 "is_configured": true, 00:15:22.911 "data_offset": 2048, 00:15:22.911 "data_size": 63488 00:15:22.911 } 00:15:22.911 ] 00:15:22.911 }' 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81704 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81704 ']' 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81704 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:22.911 10:26:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81704 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81704' 00:15:22.911 killing process with pid 81704 00:15:22.911 Received shutdown signal, test time was about 60.000000 seconds 00:15:22.911 00:15:22.911 Latency(us) 00:15:22.911 [2024-11-19T10:26:36.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.911 [2024-11-19T10:26:36.692Z] =================================================================================================================== 00:15:22.911 [2024-11-19T10:26:36.692Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81704 00:15:22.911 [2024-11-19 10:26:36.581362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.911 [2024-11-19 10:26:36.581481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.911 [2024-11-19 10:26:36.581543] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.911 10:26:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81704 00:15:22.911 [2024-11-19 10:26:36.581554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:23.171 [2024-11-19 10:26:36.949251] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.554 10:26:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:24.554 00:15:24.554 real 0m22.984s 00:15:24.554 user 0m29.524s 
00:15:24.554 sys 0m2.699s 00:15:24.554 10:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.554 10:26:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.554 ************************************ 00:15:24.554 END TEST raid5f_rebuild_test_sb 00:15:24.554 ************************************ 00:15:24.554 10:26:38 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:24.554 10:26:38 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:24.554 10:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:24.554 10:26:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.554 10:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.554 ************************************ 00:15:24.554 START TEST raid5f_state_function_test 00:15:24.554 ************************************ 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82452 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82452' 00:15:24.554 Process raid pid: 82452 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82452 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82452 ']' 00:15:24.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.554 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.554 [2024-11-19 10:26:38.138637] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:15:24.554 [2024-11-19 10:26:38.138813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.554 [2024-11-19 10:26:38.309694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.813 [2024-11-19 10:26:38.421439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.072 [2024-11-19 10:26:38.599966] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.072 [2024-11-19 10:26:38.600017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.332 [2024-11-19 10:26:38.968809] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.332 [2024-11-19 10:26:38.968903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.332 [2024-11-19 10:26:38.968948] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.332 [2024-11-19 10:26:38.968972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.332 [2024-11-19 10:26:38.968991] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:25.332 [2024-11-19 10:26:38.969020] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.332 [2024-11-19 10:26:38.969039] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.332 [2024-11-19 10:26:38.969059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.332 10:26:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.332 10:26:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.332 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.332 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.332 "name": "Existed_Raid", 00:15:25.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.332 "strip_size_kb": 64, 00:15:25.332 "state": "configuring", 00:15:25.332 "raid_level": "raid5f", 00:15:25.332 "superblock": false, 00:15:25.332 "num_base_bdevs": 4, 00:15:25.332 "num_base_bdevs_discovered": 0, 00:15:25.332 "num_base_bdevs_operational": 4, 00:15:25.332 "base_bdevs_list": [ 00:15:25.332 { 00:15:25.332 "name": "BaseBdev1", 00:15:25.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.332 "is_configured": false, 00:15:25.332 "data_offset": 0, 00:15:25.332 "data_size": 0 00:15:25.332 }, 00:15:25.332 { 00:15:25.332 "name": "BaseBdev2", 00:15:25.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.332 "is_configured": false, 00:15:25.332 "data_offset": 0, 00:15:25.332 "data_size": 0 00:15:25.332 }, 00:15:25.332 { 00:15:25.332 "name": "BaseBdev3", 00:15:25.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.332 "is_configured": false, 00:15:25.332 "data_offset": 0, 00:15:25.332 "data_size": 0 00:15:25.332 }, 00:15:25.332 { 00:15:25.332 "name": "BaseBdev4", 00:15:25.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.332 "is_configured": false, 00:15:25.332 "data_offset": 0, 00:15:25.332 "data_size": 0 00:15:25.332 } 00:15:25.332 ] 00:15:25.332 }' 00:15:25.332 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.332 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.903 [2024-11-19 10:26:39.380090] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.903 [2024-11-19 10:26:39.380163] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.903 [2024-11-19 10:26:39.392079] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.903 [2024-11-19 10:26:39.392173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.903 [2024-11-19 10:26:39.392199] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.903 [2024-11-19 10:26:39.392221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.903 [2024-11-19 10:26:39.392239] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.903 [2024-11-19 10:26:39.392260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.903 [2024-11-19 10:26:39.392277] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:25.903 [2024-11-19 10:26:39.392297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.903 [2024-11-19 10:26:39.440113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.903 BaseBdev1 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.903 
10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.903 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.903 [ 00:15:25.903 { 00:15:25.903 "name": "BaseBdev1", 00:15:25.903 "aliases": [ 00:15:25.903 "798c4a3a-399c-4a83-8177-188e32ca6231" 00:15:25.903 ], 00:15:25.903 "product_name": "Malloc disk", 00:15:25.903 "block_size": 512, 00:15:25.903 "num_blocks": 65536, 00:15:25.903 "uuid": "798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:25.903 "assigned_rate_limits": { 00:15:25.903 "rw_ios_per_sec": 0, 00:15:25.903 "rw_mbytes_per_sec": 0, 00:15:25.903 "r_mbytes_per_sec": 0, 00:15:25.903 "w_mbytes_per_sec": 0 00:15:25.903 }, 00:15:25.903 "claimed": true, 00:15:25.903 "claim_type": "exclusive_write", 00:15:25.903 "zoned": false, 00:15:25.903 "supported_io_types": { 00:15:25.903 "read": true, 00:15:25.903 "write": true, 00:15:25.903 "unmap": true, 00:15:25.903 "flush": true, 00:15:25.903 "reset": true, 00:15:25.903 "nvme_admin": false, 00:15:25.903 "nvme_io": false, 00:15:25.903 "nvme_io_md": false, 00:15:25.903 "write_zeroes": true, 00:15:25.903 "zcopy": true, 00:15:25.903 "get_zone_info": false, 00:15:25.903 "zone_management": false, 00:15:25.903 "zone_append": false, 00:15:25.903 "compare": false, 00:15:25.903 "compare_and_write": false, 00:15:25.903 "abort": true, 00:15:25.903 "seek_hole": false, 00:15:25.903 "seek_data": false, 00:15:25.903 "copy": true, 00:15:25.903 "nvme_iov_md": false 00:15:25.904 }, 00:15:25.904 "memory_domains": [ 00:15:25.904 { 00:15:25.904 "dma_device_id": "system", 00:15:25.904 "dma_device_type": 1 00:15:25.904 }, 00:15:25.904 { 00:15:25.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:25.904 "dma_device_type": 2 00:15:25.904 } 00:15:25.904 ], 00:15:25.904 "driver_specific": {} 00:15:25.904 } 
00:15:25.904 ] 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.904 "name": "Existed_Raid", 00:15:25.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.904 "strip_size_kb": 64, 00:15:25.904 "state": "configuring", 00:15:25.904 "raid_level": "raid5f", 00:15:25.904 "superblock": false, 00:15:25.904 "num_base_bdevs": 4, 00:15:25.904 "num_base_bdevs_discovered": 1, 00:15:25.904 "num_base_bdevs_operational": 4, 00:15:25.904 "base_bdevs_list": [ 00:15:25.904 { 00:15:25.904 "name": "BaseBdev1", 00:15:25.904 "uuid": "798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:25.904 "is_configured": true, 00:15:25.904 "data_offset": 0, 00:15:25.904 "data_size": 65536 00:15:25.904 }, 00:15:25.904 { 00:15:25.904 "name": "BaseBdev2", 00:15:25.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.904 "is_configured": false, 00:15:25.904 "data_offset": 0, 00:15:25.904 "data_size": 0 00:15:25.904 }, 00:15:25.904 { 00:15:25.904 "name": "BaseBdev3", 00:15:25.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.904 "is_configured": false, 00:15:25.904 "data_offset": 0, 00:15:25.904 "data_size": 0 00:15:25.904 }, 00:15:25.904 { 00:15:25.904 "name": "BaseBdev4", 00:15:25.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.904 "is_configured": false, 00:15:25.904 "data_offset": 0, 00:15:25.904 "data_size": 0 00:15:25.904 } 00:15:25.904 ] 00:15:25.904 }' 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.904 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.163 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.163 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.163 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.163 
[2024-11-19 10:26:39.891392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.163 [2024-11-19 10:26:39.891501] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.164 [2024-11-19 10:26:39.903422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.164 [2024-11-19 10:26:39.905260] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.164 [2024-11-19 10:26:39.905335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.164 [2024-11-19 10:26:39.905379] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.164 [2024-11-19 10:26:39.905405] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.164 [2024-11-19 10:26:39.905424] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.164 [2024-11-19 10:26:39.905445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.164 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.423 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.423 "name": "Existed_Raid", 00:15:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:26.423 "strip_size_kb": 64, 00:15:26.423 "state": "configuring", 00:15:26.423 "raid_level": "raid5f", 00:15:26.423 "superblock": false, 00:15:26.423 "num_base_bdevs": 4, 00:15:26.423 "num_base_bdevs_discovered": 1, 00:15:26.423 "num_base_bdevs_operational": 4, 00:15:26.423 "base_bdevs_list": [ 00:15:26.423 { 00:15:26.423 "name": "BaseBdev1", 00:15:26.423 "uuid": "798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:26.423 "is_configured": true, 00:15:26.423 "data_offset": 0, 00:15:26.423 "data_size": 65536 00:15:26.423 }, 00:15:26.423 { 00:15:26.423 "name": "BaseBdev2", 00:15:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.423 "is_configured": false, 00:15:26.423 "data_offset": 0, 00:15:26.423 "data_size": 0 00:15:26.423 }, 00:15:26.423 { 00:15:26.423 "name": "BaseBdev3", 00:15:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.423 "is_configured": false, 00:15:26.423 "data_offset": 0, 00:15:26.423 "data_size": 0 00:15:26.423 }, 00:15:26.423 { 00:15:26.423 "name": "BaseBdev4", 00:15:26.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.423 "is_configured": false, 00:15:26.423 "data_offset": 0, 00:15:26.423 "data_size": 0 00:15:26.423 } 00:15:26.423 ] 00:15:26.423 }' 00:15:26.423 10:26:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.423 10:26:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.682 [2024-11-19 10:26:40.371033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.682 BaseBdev2 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.682 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.682 [ 00:15:26.682 { 00:15:26.682 "name": "BaseBdev2", 00:15:26.682 "aliases": [ 00:15:26.682 "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc" 00:15:26.682 ], 00:15:26.682 "product_name": "Malloc disk", 00:15:26.682 "block_size": 512, 00:15:26.682 "num_blocks": 65536, 00:15:26.682 "uuid": "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc", 00:15:26.682 "assigned_rate_limits": { 00:15:26.682 "rw_ios_per_sec": 0, 00:15:26.682 "rw_mbytes_per_sec": 0, 00:15:26.682 
"r_mbytes_per_sec": 0, 00:15:26.682 "w_mbytes_per_sec": 0 00:15:26.682 }, 00:15:26.682 "claimed": true, 00:15:26.682 "claim_type": "exclusive_write", 00:15:26.682 "zoned": false, 00:15:26.683 "supported_io_types": { 00:15:26.683 "read": true, 00:15:26.683 "write": true, 00:15:26.683 "unmap": true, 00:15:26.683 "flush": true, 00:15:26.683 "reset": true, 00:15:26.683 "nvme_admin": false, 00:15:26.683 "nvme_io": false, 00:15:26.683 "nvme_io_md": false, 00:15:26.683 "write_zeroes": true, 00:15:26.683 "zcopy": true, 00:15:26.683 "get_zone_info": false, 00:15:26.683 "zone_management": false, 00:15:26.683 "zone_append": false, 00:15:26.683 "compare": false, 00:15:26.683 "compare_and_write": false, 00:15:26.683 "abort": true, 00:15:26.683 "seek_hole": false, 00:15:26.683 "seek_data": false, 00:15:26.683 "copy": true, 00:15:26.683 "nvme_iov_md": false 00:15:26.683 }, 00:15:26.683 "memory_domains": [ 00:15:26.683 { 00:15:26.683 "dma_device_id": "system", 00:15:26.683 "dma_device_type": 1 00:15:26.683 }, 00:15:26.683 { 00:15:26.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.683 "dma_device_type": 2 00:15:26.683 } 00:15:26.683 ], 00:15:26.683 "driver_specific": {} 00:15:26.683 } 00:15:26.683 ] 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.683 "name": "Existed_Raid", 00:15:26.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.683 "strip_size_kb": 64, 00:15:26.683 "state": "configuring", 00:15:26.683 "raid_level": "raid5f", 00:15:26.683 "superblock": false, 00:15:26.683 "num_base_bdevs": 4, 00:15:26.683 "num_base_bdevs_discovered": 2, 00:15:26.683 "num_base_bdevs_operational": 4, 00:15:26.683 "base_bdevs_list": [ 00:15:26.683 { 00:15:26.683 "name": "BaseBdev1", 00:15:26.683 "uuid": 
"798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:26.683 "is_configured": true, 00:15:26.683 "data_offset": 0, 00:15:26.683 "data_size": 65536 00:15:26.683 }, 00:15:26.683 { 00:15:26.683 "name": "BaseBdev2", 00:15:26.683 "uuid": "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc", 00:15:26.683 "is_configured": true, 00:15:26.683 "data_offset": 0, 00:15:26.683 "data_size": 65536 00:15:26.683 }, 00:15:26.683 { 00:15:26.683 "name": "BaseBdev3", 00:15:26.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.683 "is_configured": false, 00:15:26.683 "data_offset": 0, 00:15:26.683 "data_size": 0 00:15:26.683 }, 00:15:26.683 { 00:15:26.683 "name": "BaseBdev4", 00:15:26.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.683 "is_configured": false, 00:15:26.683 "data_offset": 0, 00:15:26.683 "data_size": 0 00:15:26.683 } 00:15:26.683 ] 00:15:26.683 }' 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.683 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.253 [2024-11-19 10:26:40.907750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.253 BaseBdev3 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.253 [ 00:15:27.253 { 00:15:27.253 "name": "BaseBdev3", 00:15:27.253 "aliases": [ 00:15:27.253 "d324b15a-9e55-4468-bed4-a3e7aa4b09af" 00:15:27.253 ], 00:15:27.253 "product_name": "Malloc disk", 00:15:27.253 "block_size": 512, 00:15:27.253 "num_blocks": 65536, 00:15:27.253 "uuid": "d324b15a-9e55-4468-bed4-a3e7aa4b09af", 00:15:27.253 "assigned_rate_limits": { 00:15:27.253 "rw_ios_per_sec": 0, 00:15:27.253 "rw_mbytes_per_sec": 0, 00:15:27.253 "r_mbytes_per_sec": 0, 00:15:27.253 "w_mbytes_per_sec": 0 00:15:27.253 }, 00:15:27.253 "claimed": true, 00:15:27.253 "claim_type": "exclusive_write", 00:15:27.253 "zoned": false, 00:15:27.253 "supported_io_types": { 00:15:27.253 "read": true, 00:15:27.253 "write": true, 00:15:27.253 "unmap": true, 00:15:27.253 "flush": true, 00:15:27.253 "reset": true, 00:15:27.253 "nvme_admin": false, 
00:15:27.253 "nvme_io": false, 00:15:27.253 "nvme_io_md": false, 00:15:27.253 "write_zeroes": true, 00:15:27.253 "zcopy": true, 00:15:27.253 "get_zone_info": false, 00:15:27.253 "zone_management": false, 00:15:27.253 "zone_append": false, 00:15:27.253 "compare": false, 00:15:27.253 "compare_and_write": false, 00:15:27.253 "abort": true, 00:15:27.253 "seek_hole": false, 00:15:27.253 "seek_data": false, 00:15:27.253 "copy": true, 00:15:27.253 "nvme_iov_md": false 00:15:27.253 }, 00:15:27.253 "memory_domains": [ 00:15:27.253 { 00:15:27.253 "dma_device_id": "system", 00:15:27.253 "dma_device_type": 1 00:15:27.253 }, 00:15:27.253 { 00:15:27.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.253 "dma_device_type": 2 00:15:27.253 } 00:15:27.253 ], 00:15:27.253 "driver_specific": {} 00:15:27.253 } 00:15:27.253 ] 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.253 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.254 "name": "Existed_Raid", 00:15:27.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.254 "strip_size_kb": 64, 00:15:27.254 "state": "configuring", 00:15:27.254 "raid_level": "raid5f", 00:15:27.254 "superblock": false, 00:15:27.254 "num_base_bdevs": 4, 00:15:27.254 "num_base_bdevs_discovered": 3, 00:15:27.254 "num_base_bdevs_operational": 4, 00:15:27.254 "base_bdevs_list": [ 00:15:27.254 { 00:15:27.254 "name": "BaseBdev1", 00:15:27.254 "uuid": "798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:27.254 "is_configured": true, 00:15:27.254 "data_offset": 0, 00:15:27.254 "data_size": 65536 00:15:27.254 }, 00:15:27.254 { 00:15:27.254 "name": "BaseBdev2", 00:15:27.254 "uuid": "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc", 00:15:27.254 "is_configured": true, 00:15:27.254 "data_offset": 0, 00:15:27.254 "data_size": 65536 00:15:27.254 }, 00:15:27.254 { 
00:15:27.254 "name": "BaseBdev3", 00:15:27.254 "uuid": "d324b15a-9e55-4468-bed4-a3e7aa4b09af", 00:15:27.254 "is_configured": true, 00:15:27.254 "data_offset": 0, 00:15:27.254 "data_size": 65536 00:15:27.254 }, 00:15:27.254 { 00:15:27.254 "name": "BaseBdev4", 00:15:27.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.254 "is_configured": false, 00:15:27.254 "data_offset": 0, 00:15:27.254 "data_size": 0 00:15:27.254 } 00:15:27.254 ] 00:15:27.254 }' 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.254 10:26:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.821 [2024-11-19 10:26:41.459439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:27.821 [2024-11-19 10:26:41.459589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.821 [2024-11-19 10:26:41.459619] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:27.821 [2024-11-19 10:26:41.459915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:27.821 BaseBdev4 00:15:27.821 [2024-11-19 10:26:41.466396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.821 [2024-11-19 10:26:41.466418] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:27.821 [2024-11-19 10:26:41.466680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.821 10:26:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.821 [ 00:15:27.821 { 00:15:27.821 "name": "BaseBdev4", 00:15:27.821 "aliases": [ 00:15:27.821 "b15e81ef-5546-4c5c-b5e7-ffdcf02c379a" 00:15:27.821 ], 00:15:27.821 "product_name": "Malloc disk", 00:15:27.821 "block_size": 512, 00:15:27.821 "num_blocks": 65536, 00:15:27.821 "uuid": "b15e81ef-5546-4c5c-b5e7-ffdcf02c379a", 00:15:27.821 "assigned_rate_limits": { 00:15:27.821 "rw_ios_per_sec": 0, 00:15:27.821 
"rw_mbytes_per_sec": 0, 00:15:27.821 "r_mbytes_per_sec": 0, 00:15:27.821 "w_mbytes_per_sec": 0 00:15:27.821 }, 00:15:27.821 "claimed": true, 00:15:27.821 "claim_type": "exclusive_write", 00:15:27.821 "zoned": false, 00:15:27.821 "supported_io_types": { 00:15:27.821 "read": true, 00:15:27.821 "write": true, 00:15:27.821 "unmap": true, 00:15:27.821 "flush": true, 00:15:27.821 "reset": true, 00:15:27.821 "nvme_admin": false, 00:15:27.821 "nvme_io": false, 00:15:27.821 "nvme_io_md": false, 00:15:27.821 "write_zeroes": true, 00:15:27.821 "zcopy": true, 00:15:27.821 "get_zone_info": false, 00:15:27.821 "zone_management": false, 00:15:27.821 "zone_append": false, 00:15:27.821 "compare": false, 00:15:27.821 "compare_and_write": false, 00:15:27.821 "abort": true, 00:15:27.821 "seek_hole": false, 00:15:27.821 "seek_data": false, 00:15:27.821 "copy": true, 00:15:27.821 "nvme_iov_md": false 00:15:27.821 }, 00:15:27.821 "memory_domains": [ 00:15:27.821 { 00:15:27.821 "dma_device_id": "system", 00:15:27.821 "dma_device_type": 1 00:15:27.821 }, 00:15:27.821 { 00:15:27.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.821 "dma_device_type": 2 00:15:27.821 } 00:15:27.821 ], 00:15:27.821 "driver_specific": {} 00:15:27.821 } 00:15:27.821 ] 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.821 10:26:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.821 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.822 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.822 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.822 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.822 "name": "Existed_Raid", 00:15:27.822 "uuid": "20592209-274b-4005-947e-81e27f22b332", 00:15:27.822 "strip_size_kb": 64, 00:15:27.822 "state": "online", 00:15:27.822 "raid_level": "raid5f", 00:15:27.822 "superblock": false, 00:15:27.822 "num_base_bdevs": 4, 00:15:27.822 "num_base_bdevs_discovered": 4, 00:15:27.822 "num_base_bdevs_operational": 4, 00:15:27.822 "base_bdevs_list": [ 00:15:27.822 { 00:15:27.822 "name": 
"BaseBdev1", 00:15:27.822 "uuid": "798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:27.822 "is_configured": true, 00:15:27.822 "data_offset": 0, 00:15:27.822 "data_size": 65536 00:15:27.822 }, 00:15:27.822 { 00:15:27.822 "name": "BaseBdev2", 00:15:27.822 "uuid": "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc", 00:15:27.822 "is_configured": true, 00:15:27.822 "data_offset": 0, 00:15:27.822 "data_size": 65536 00:15:27.822 }, 00:15:27.822 { 00:15:27.822 "name": "BaseBdev3", 00:15:27.822 "uuid": "d324b15a-9e55-4468-bed4-a3e7aa4b09af", 00:15:27.822 "is_configured": true, 00:15:27.822 "data_offset": 0, 00:15:27.822 "data_size": 65536 00:15:27.822 }, 00:15:27.822 { 00:15:27.822 "name": "BaseBdev4", 00:15:27.822 "uuid": "b15e81ef-5546-4c5c-b5e7-ffdcf02c379a", 00:15:27.822 "is_configured": true, 00:15:27.822 "data_offset": 0, 00:15:27.822 "data_size": 65536 00:15:27.822 } 00:15:27.822 ] 00:15:27.822 }' 00:15:27.822 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.822 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.418 [2024-11-19 10:26:41.942146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.418 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.418 "name": "Existed_Raid", 00:15:28.418 "aliases": [ 00:15:28.418 "20592209-274b-4005-947e-81e27f22b332" 00:15:28.418 ], 00:15:28.418 "product_name": "Raid Volume", 00:15:28.418 "block_size": 512, 00:15:28.418 "num_blocks": 196608, 00:15:28.418 "uuid": "20592209-274b-4005-947e-81e27f22b332", 00:15:28.418 "assigned_rate_limits": { 00:15:28.418 "rw_ios_per_sec": 0, 00:15:28.418 "rw_mbytes_per_sec": 0, 00:15:28.418 "r_mbytes_per_sec": 0, 00:15:28.418 "w_mbytes_per_sec": 0 00:15:28.418 }, 00:15:28.418 "claimed": false, 00:15:28.418 "zoned": false, 00:15:28.418 "supported_io_types": { 00:15:28.418 "read": true, 00:15:28.418 "write": true, 00:15:28.418 "unmap": false, 00:15:28.418 "flush": false, 00:15:28.418 "reset": true, 00:15:28.418 "nvme_admin": false, 00:15:28.418 "nvme_io": false, 00:15:28.418 "nvme_io_md": false, 00:15:28.418 "write_zeroes": true, 00:15:28.418 "zcopy": false, 00:15:28.418 "get_zone_info": false, 00:15:28.418 "zone_management": false, 00:15:28.418 "zone_append": false, 00:15:28.418 "compare": false, 00:15:28.418 "compare_and_write": false, 00:15:28.418 "abort": false, 00:15:28.418 "seek_hole": false, 00:15:28.418 "seek_data": false, 00:15:28.418 "copy": false, 00:15:28.418 "nvme_iov_md": false 00:15:28.418 }, 00:15:28.418 "driver_specific": { 00:15:28.418 "raid": { 00:15:28.418 "uuid": "20592209-274b-4005-947e-81e27f22b332", 00:15:28.418 "strip_size_kb": 64, 
00:15:28.418 "state": "online", 00:15:28.418 "raid_level": "raid5f", 00:15:28.418 "superblock": false, 00:15:28.418 "num_base_bdevs": 4, 00:15:28.418 "num_base_bdevs_discovered": 4, 00:15:28.418 "num_base_bdevs_operational": 4, 00:15:28.418 "base_bdevs_list": [ 00:15:28.418 { 00:15:28.418 "name": "BaseBdev1", 00:15:28.418 "uuid": "798c4a3a-399c-4a83-8177-188e32ca6231", 00:15:28.418 "is_configured": true, 00:15:28.418 "data_offset": 0, 00:15:28.418 "data_size": 65536 00:15:28.418 }, 00:15:28.418 { 00:15:28.418 "name": "BaseBdev2", 00:15:28.418 "uuid": "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc", 00:15:28.418 "is_configured": true, 00:15:28.418 "data_offset": 0, 00:15:28.418 "data_size": 65536 00:15:28.418 }, 00:15:28.418 { 00:15:28.418 "name": "BaseBdev3", 00:15:28.418 "uuid": "d324b15a-9e55-4468-bed4-a3e7aa4b09af", 00:15:28.418 "is_configured": true, 00:15:28.418 "data_offset": 0, 00:15:28.419 "data_size": 65536 00:15:28.419 }, 00:15:28.419 { 00:15:28.419 "name": "BaseBdev4", 00:15:28.419 "uuid": "b15e81ef-5546-4c5c-b5e7-ffdcf02c379a", 00:15:28.419 "is_configured": true, 00:15:28.419 "data_offset": 0, 00:15:28.419 "data_size": 65536 00:15:28.419 } 00:15:28.419 ] 00:15:28.419 } 00:15:28.419 } 00:15:28.419 }' 00:15:28.419 10:26:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:28.419 BaseBdev2 00:15:28.419 BaseBdev3 00:15:28.419 BaseBdev4' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.419 10:26:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.419 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:15:28.678 [2024-11-19 10:26:42.277368] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.678 10:26:42 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.678 "name": "Existed_Raid", 00:15:28.678 "uuid": "20592209-274b-4005-947e-81e27f22b332", 00:15:28.678 "strip_size_kb": 64, 00:15:28.678 "state": "online", 00:15:28.678 "raid_level": "raid5f", 00:15:28.678 "superblock": false, 00:15:28.678 "num_base_bdevs": 4, 00:15:28.678 "num_base_bdevs_discovered": 3, 00:15:28.678 "num_base_bdevs_operational": 3, 00:15:28.678 "base_bdevs_list": [ 00:15:28.678 { 00:15:28.678 "name": null, 00:15:28.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.678 "is_configured": false, 00:15:28.678 "data_offset": 0, 00:15:28.678 "data_size": 65536 00:15:28.678 }, 00:15:28.678 { 00:15:28.678 "name": "BaseBdev2", 00:15:28.678 "uuid": "d1da4f08-05b1-4d14-9fed-e5ff6d87cccc", 00:15:28.678 "is_configured": true, 00:15:28.678 "data_offset": 0, 00:15:28.678 "data_size": 65536 00:15:28.678 }, 00:15:28.678 { 00:15:28.678 "name": "BaseBdev3", 00:15:28.678 "uuid": "d324b15a-9e55-4468-bed4-a3e7aa4b09af", 00:15:28.678 "is_configured": true, 00:15:28.678 "data_offset": 0, 00:15:28.678 "data_size": 65536 00:15:28.678 }, 00:15:28.678 { 00:15:28.678 "name": "BaseBdev4", 00:15:28.678 "uuid": "b15e81ef-5546-4c5c-b5e7-ffdcf02c379a", 00:15:28.678 "is_configured": true, 00:15:28.678 "data_offset": 0, 00:15:28.678 "data_size": 65536 00:15:28.678 } 00:15:28.678 ] 00:15:28.678 }' 00:15:28.678 
10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.678 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.245 [2024-11-19 10:26:42.827358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.245 [2024-11-19 10:26:42.827511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.245 [2024-11-19 10:26:42.919669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.245 10:26:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.245 [2024-11-19 10:26:42.975606] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 [2024-11-19 10:26:43.123600] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:29.505 [2024-11-19 10:26:43.123691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.505 10:26:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.505 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 BaseBdev2 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 [ 00:15:29.766 { 00:15:29.766 "name": "BaseBdev2", 00:15:29.766 "aliases": [ 00:15:29.766 "e2550400-2dfa-4575-9e0d-6fc4c987cdf0" 00:15:29.766 ], 00:15:29.766 "product_name": "Malloc disk", 00:15:29.766 "block_size": 512, 00:15:29.766 "num_blocks": 65536, 00:15:29.766 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:29.766 "assigned_rate_limits": { 00:15:29.766 "rw_ios_per_sec": 0, 00:15:29.766 "rw_mbytes_per_sec": 0, 00:15:29.766 "r_mbytes_per_sec": 0, 00:15:29.766 "w_mbytes_per_sec": 0 00:15:29.766 }, 00:15:29.766 "claimed": false, 00:15:29.766 "zoned": false, 00:15:29.766 "supported_io_types": { 00:15:29.766 "read": true, 00:15:29.766 "write": true, 00:15:29.766 "unmap": true, 00:15:29.766 "flush": true, 00:15:29.766 "reset": true, 00:15:29.766 "nvme_admin": false, 00:15:29.766 "nvme_io": false, 00:15:29.766 "nvme_io_md": false, 00:15:29.766 "write_zeroes": true, 00:15:29.766 "zcopy": true, 00:15:29.766 "get_zone_info": false, 00:15:29.766 "zone_management": false, 00:15:29.766 "zone_append": false, 00:15:29.766 "compare": false, 00:15:29.766 "compare_and_write": false, 00:15:29.766 "abort": true, 00:15:29.766 "seek_hole": false, 00:15:29.766 "seek_data": false, 00:15:29.766 "copy": true, 00:15:29.766 "nvme_iov_md": false 00:15:29.766 }, 00:15:29.766 "memory_domains": [ 00:15:29.766 { 00:15:29.766 "dma_device_id": "system", 00:15:29.766 "dma_device_type": 1 00:15:29.766 }, 
00:15:29.766 { 00:15:29.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.766 "dma_device_type": 2 00:15:29.766 } 00:15:29.766 ], 00:15:29.766 "driver_specific": {} 00:15:29.766 } 00:15:29.766 ] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 BaseBdev3 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 [ 00:15:29.766 { 00:15:29.766 "name": "BaseBdev3", 00:15:29.766 "aliases": [ 00:15:29.766 "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c" 00:15:29.766 ], 00:15:29.766 "product_name": "Malloc disk", 00:15:29.766 "block_size": 512, 00:15:29.766 "num_blocks": 65536, 00:15:29.766 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:29.766 "assigned_rate_limits": { 00:15:29.766 "rw_ios_per_sec": 0, 00:15:29.766 "rw_mbytes_per_sec": 0, 00:15:29.766 "r_mbytes_per_sec": 0, 00:15:29.766 "w_mbytes_per_sec": 0 00:15:29.766 }, 00:15:29.766 "claimed": false, 00:15:29.766 "zoned": false, 00:15:29.766 "supported_io_types": { 00:15:29.766 "read": true, 00:15:29.766 "write": true, 00:15:29.766 "unmap": true, 00:15:29.766 "flush": true, 00:15:29.766 "reset": true, 00:15:29.766 "nvme_admin": false, 00:15:29.766 "nvme_io": false, 00:15:29.766 "nvme_io_md": false, 00:15:29.766 "write_zeroes": true, 00:15:29.766 "zcopy": true, 00:15:29.766 "get_zone_info": false, 00:15:29.766 "zone_management": false, 00:15:29.766 "zone_append": false, 00:15:29.766 "compare": false, 00:15:29.766 "compare_and_write": false, 00:15:29.766 "abort": true, 00:15:29.766 "seek_hole": false, 00:15:29.766 "seek_data": false, 00:15:29.766 "copy": true, 00:15:29.766 "nvme_iov_md": false 00:15:29.766 }, 00:15:29.766 "memory_domains": [ 00:15:29.766 { 00:15:29.766 "dma_device_id": "system", 00:15:29.766 
"dma_device_type": 1 00:15:29.766 }, 00:15:29.766 { 00:15:29.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.766 "dma_device_type": 2 00:15:29.766 } 00:15:29.766 ], 00:15:29.766 "driver_specific": {} 00:15:29.766 } 00:15:29.766 ] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.766 BaseBdev4 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.766 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.767 10:26:43 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.767 [ 00:15:29.767 { 00:15:29.767 "name": "BaseBdev4", 00:15:29.767 "aliases": [ 00:15:29.767 "0d2ff26d-47ce-4356-ad87-e4cc378f3dab" 00:15:29.767 ], 00:15:29.767 "product_name": "Malloc disk", 00:15:29.767 "block_size": 512, 00:15:29.767 "num_blocks": 65536, 00:15:29.767 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:29.767 "assigned_rate_limits": { 00:15:29.767 "rw_ios_per_sec": 0, 00:15:29.767 "rw_mbytes_per_sec": 0, 00:15:29.767 "r_mbytes_per_sec": 0, 00:15:29.767 "w_mbytes_per_sec": 0 00:15:29.767 }, 00:15:29.767 "claimed": false, 00:15:29.767 "zoned": false, 00:15:29.767 "supported_io_types": { 00:15:29.767 "read": true, 00:15:29.767 "write": true, 00:15:29.767 "unmap": true, 00:15:29.767 "flush": true, 00:15:29.767 "reset": true, 00:15:29.767 "nvme_admin": false, 00:15:29.767 "nvme_io": false, 00:15:29.767 "nvme_io_md": false, 00:15:29.767 "write_zeroes": true, 00:15:29.767 "zcopy": true, 00:15:29.767 "get_zone_info": false, 00:15:29.767 "zone_management": false, 00:15:29.767 "zone_append": false, 00:15:29.767 "compare": false, 00:15:29.767 "compare_and_write": false, 00:15:29.767 "abort": true, 00:15:29.767 "seek_hole": false, 00:15:29.767 "seek_data": false, 00:15:29.767 "copy": true, 00:15:29.767 "nvme_iov_md": false 00:15:29.767 }, 00:15:29.767 "memory_domains": [ 00:15:29.767 { 00:15:29.767 
"dma_device_id": "system", 00:15:29.767 "dma_device_type": 1 00:15:29.767 }, 00:15:29.767 { 00:15:29.767 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.767 "dma_device_type": 2 00:15:29.767 } 00:15:29.767 ], 00:15:29.767 "driver_specific": {} 00:15:29.767 } 00:15:29.767 ] 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.767 [2024-11-19 10:26:43.491408] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.767 [2024-11-19 10:26:43.491497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.767 [2024-11-19 10:26:43.491537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.767 [2024-11-19 10:26:43.493247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.767 [2024-11-19 10:26:43.493295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.767 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.027 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.027 "name": "Existed_Raid", 00:15:30.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.027 "strip_size_kb": 64, 00:15:30.027 "state": "configuring", 00:15:30.027 "raid_level": "raid5f", 00:15:30.027 "superblock": false, 00:15:30.027 
"num_base_bdevs": 4, 00:15:30.027 "num_base_bdevs_discovered": 3, 00:15:30.027 "num_base_bdevs_operational": 4, 00:15:30.027 "base_bdevs_list": [ 00:15:30.027 { 00:15:30.027 "name": "BaseBdev1", 00:15:30.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.027 "is_configured": false, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 0 00:15:30.027 }, 00:15:30.027 { 00:15:30.027 "name": "BaseBdev2", 00:15:30.027 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 }, 00:15:30.027 { 00:15:30.027 "name": "BaseBdev3", 00:15:30.027 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 }, 00:15:30.027 { 00:15:30.027 "name": "BaseBdev4", 00:15:30.027 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:30.027 "is_configured": true, 00:15:30.027 "data_offset": 0, 00:15:30.027 "data_size": 65536 00:15:30.027 } 00:15:30.027 ] 00:15:30.027 }' 00:15:30.027 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.027 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.287 [2024-11-19 10:26:43.954590] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.287 10:26:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.287 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.287 "name": "Existed_Raid", 00:15:30.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.287 "strip_size_kb": 64, 00:15:30.287 "state": "configuring", 00:15:30.287 "raid_level": "raid5f", 00:15:30.287 "superblock": false, 00:15:30.287 "num_base_bdevs": 4, 
00:15:30.287 "num_base_bdevs_discovered": 2, 00:15:30.287 "num_base_bdevs_operational": 4, 00:15:30.287 "base_bdevs_list": [ 00:15:30.287 { 00:15:30.287 "name": "BaseBdev1", 00:15:30.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.287 "is_configured": false, 00:15:30.287 "data_offset": 0, 00:15:30.287 "data_size": 0 00:15:30.287 }, 00:15:30.287 { 00:15:30.287 "name": null, 00:15:30.287 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:30.287 "is_configured": false, 00:15:30.287 "data_offset": 0, 00:15:30.287 "data_size": 65536 00:15:30.287 }, 00:15:30.287 { 00:15:30.287 "name": "BaseBdev3", 00:15:30.287 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:30.287 "is_configured": true, 00:15:30.287 "data_offset": 0, 00:15:30.287 "data_size": 65536 00:15:30.287 }, 00:15:30.287 { 00:15:30.287 "name": "BaseBdev4", 00:15:30.287 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:30.287 "is_configured": true, 00:15:30.287 "data_offset": 0, 00:15:30.287 "data_size": 65536 00:15:30.287 } 00:15:30.287 ] 00:15:30.287 }' 00:15:30.287 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.287 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:30.857 10:26:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.857 [2024-11-19 10:26:44.488274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.857 BaseBdev1 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.857 10:26:44 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.857 [ 00:15:30.857 { 00:15:30.857 "name": "BaseBdev1", 00:15:30.857 "aliases": [ 00:15:30.857 "4c7b37a4-f796-4dd4-993b-a1f04d68eaef" 00:15:30.857 ], 00:15:30.857 "product_name": "Malloc disk", 00:15:30.857 "block_size": 512, 00:15:30.857 "num_blocks": 65536, 00:15:30.857 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:30.857 "assigned_rate_limits": { 00:15:30.857 "rw_ios_per_sec": 0, 00:15:30.857 "rw_mbytes_per_sec": 0, 00:15:30.857 "r_mbytes_per_sec": 0, 00:15:30.857 "w_mbytes_per_sec": 0 00:15:30.857 }, 00:15:30.857 "claimed": true, 00:15:30.857 "claim_type": "exclusive_write", 00:15:30.857 "zoned": false, 00:15:30.857 "supported_io_types": { 00:15:30.857 "read": true, 00:15:30.857 "write": true, 00:15:30.857 "unmap": true, 00:15:30.857 "flush": true, 00:15:30.857 "reset": true, 00:15:30.857 "nvme_admin": false, 00:15:30.857 "nvme_io": false, 00:15:30.857 "nvme_io_md": false, 00:15:30.857 "write_zeroes": true, 00:15:30.857 "zcopy": true, 00:15:30.857 "get_zone_info": false, 00:15:30.857 "zone_management": false, 00:15:30.857 "zone_append": false, 00:15:30.857 "compare": false, 00:15:30.857 "compare_and_write": false, 00:15:30.857 "abort": true, 00:15:30.857 "seek_hole": false, 00:15:30.857 "seek_data": false, 00:15:30.857 "copy": true, 00:15:30.857 "nvme_iov_md": false 00:15:30.857 }, 00:15:30.857 "memory_domains": [ 00:15:30.857 { 00:15:30.857 "dma_device_id": "system", 00:15:30.857 "dma_device_type": 1 00:15:30.857 }, 00:15:30.857 { 00:15:30.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.857 "dma_device_type": 2 00:15:30.857 } 00:15:30.857 ], 00:15:30.857 "driver_specific": {} 00:15:30.857 } 00:15:30.857 ] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:30.857 10:26:44 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.857 "name": "Existed_Raid", 00:15:30.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.857 "strip_size_kb": 64, 00:15:30.857 "state": 
"configuring", 00:15:30.857 "raid_level": "raid5f", 00:15:30.857 "superblock": false, 00:15:30.857 "num_base_bdevs": 4, 00:15:30.857 "num_base_bdevs_discovered": 3, 00:15:30.857 "num_base_bdevs_operational": 4, 00:15:30.857 "base_bdevs_list": [ 00:15:30.857 { 00:15:30.857 "name": "BaseBdev1", 00:15:30.857 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:30.857 "is_configured": true, 00:15:30.857 "data_offset": 0, 00:15:30.857 "data_size": 65536 00:15:30.857 }, 00:15:30.857 { 00:15:30.857 "name": null, 00:15:30.857 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:30.857 "is_configured": false, 00:15:30.857 "data_offset": 0, 00:15:30.857 "data_size": 65536 00:15:30.857 }, 00:15:30.857 { 00:15:30.857 "name": "BaseBdev3", 00:15:30.857 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:30.857 "is_configured": true, 00:15:30.857 "data_offset": 0, 00:15:30.857 "data_size": 65536 00:15:30.857 }, 00:15:30.857 { 00:15:30.857 "name": "BaseBdev4", 00:15:30.857 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:30.857 "is_configured": true, 00:15:30.857 "data_offset": 0, 00:15:30.857 "data_size": 65536 00:15:30.857 } 00:15:30.857 ] 00:15:30.857 }' 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.857 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.427 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:31.427 10:26:44 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.427 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.427 10:26:44 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.427 10:26:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.427 [2024-11-19 10:26:45.055360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.427 10:26:45 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.427 "name": "Existed_Raid", 00:15:31.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.427 "strip_size_kb": 64, 00:15:31.427 "state": "configuring", 00:15:31.427 "raid_level": "raid5f", 00:15:31.427 "superblock": false, 00:15:31.427 "num_base_bdevs": 4, 00:15:31.427 "num_base_bdevs_discovered": 2, 00:15:31.427 "num_base_bdevs_operational": 4, 00:15:31.427 "base_bdevs_list": [ 00:15:31.427 { 00:15:31.427 "name": "BaseBdev1", 00:15:31.427 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:31.427 "is_configured": true, 00:15:31.427 "data_offset": 0, 00:15:31.427 "data_size": 65536 00:15:31.427 }, 00:15:31.427 { 00:15:31.427 "name": null, 00:15:31.427 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:31.427 "is_configured": false, 00:15:31.427 "data_offset": 0, 00:15:31.427 "data_size": 65536 00:15:31.427 }, 00:15:31.427 { 00:15:31.427 "name": null, 00:15:31.427 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:31.427 "is_configured": false, 00:15:31.427 "data_offset": 0, 00:15:31.427 "data_size": 65536 00:15:31.427 }, 00:15:31.427 { 00:15:31.427 "name": "BaseBdev4", 00:15:31.427 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:31.427 "is_configured": true, 00:15:31.427 "data_offset": 0, 00:15:31.427 "data_size": 65536 00:15:31.427 } 00:15:31.427 ] 00:15:31.427 }' 00:15:31.427 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.427 10:26:45 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 [2024-11-19 10:26:45.598463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.997 
10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.997 "name": "Existed_Raid", 00:15:31.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:31.997 "strip_size_kb": 64, 00:15:31.997 "state": "configuring", 00:15:31.997 "raid_level": "raid5f", 00:15:31.997 "superblock": false, 00:15:31.997 "num_base_bdevs": 4, 00:15:31.997 "num_base_bdevs_discovered": 3, 00:15:31.997 "num_base_bdevs_operational": 4, 00:15:31.997 "base_bdevs_list": [ 00:15:31.997 { 00:15:31.997 "name": "BaseBdev1", 00:15:31.997 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:31.997 "is_configured": true, 00:15:31.997 "data_offset": 0, 00:15:31.997 "data_size": 65536 00:15:31.997 }, 00:15:31.997 { 00:15:31.997 "name": null, 00:15:31.997 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:31.997 "is_configured": 
false, 00:15:31.997 "data_offset": 0, 00:15:31.997 "data_size": 65536 00:15:31.997 }, 00:15:31.997 { 00:15:31.997 "name": "BaseBdev3", 00:15:31.997 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:31.997 "is_configured": true, 00:15:31.997 "data_offset": 0, 00:15:31.997 "data_size": 65536 00:15:31.997 }, 00:15:31.997 { 00:15:31.997 "name": "BaseBdev4", 00:15:31.997 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:31.997 "is_configured": true, 00:15:31.997 "data_offset": 0, 00:15:31.997 "data_size": 65536 00:15:31.997 } 00:15:31.997 ] 00:15:31.997 }' 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.997 10:26:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.566 [2024-11-19 10:26:46.105624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.566 "name": "Existed_Raid", 00:15:32.566 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:32.566 "strip_size_kb": 64, 00:15:32.566 "state": "configuring", 00:15:32.566 "raid_level": "raid5f", 00:15:32.566 "superblock": false, 00:15:32.566 "num_base_bdevs": 4, 00:15:32.566 "num_base_bdevs_discovered": 2, 00:15:32.566 "num_base_bdevs_operational": 4, 00:15:32.566 "base_bdevs_list": [ 00:15:32.566 { 00:15:32.566 "name": null, 00:15:32.566 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:32.566 "is_configured": false, 00:15:32.566 "data_offset": 0, 00:15:32.566 "data_size": 65536 00:15:32.566 }, 00:15:32.566 { 00:15:32.566 "name": null, 00:15:32.566 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:32.566 "is_configured": false, 00:15:32.566 "data_offset": 0, 00:15:32.566 "data_size": 65536 00:15:32.566 }, 00:15:32.566 { 00:15:32.566 "name": "BaseBdev3", 00:15:32.566 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:32.566 "is_configured": true, 00:15:32.566 "data_offset": 0, 00:15:32.566 "data_size": 65536 00:15:32.566 }, 00:15:32.566 { 00:15:32.566 "name": "BaseBdev4", 00:15:32.566 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:32.566 "is_configured": true, 00:15:32.566 "data_offset": 0, 00:15:32.566 "data_size": 65536 00:15:32.566 } 00:15:32.566 ] 00:15:32.566 }' 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.566 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.135 [2024-11-19 10:26:46.683031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.135 "name": "Existed_Raid", 00:15:33.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.135 "strip_size_kb": 64, 00:15:33.135 "state": "configuring", 00:15:33.135 "raid_level": "raid5f", 00:15:33.135 "superblock": false, 00:15:33.135 "num_base_bdevs": 4, 00:15:33.135 "num_base_bdevs_discovered": 3, 00:15:33.135 "num_base_bdevs_operational": 4, 00:15:33.135 "base_bdevs_list": [ 00:15:33.135 { 00:15:33.135 "name": null, 00:15:33.135 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:33.135 "is_configured": false, 00:15:33.135 "data_offset": 0, 00:15:33.135 "data_size": 65536 00:15:33.135 }, 00:15:33.135 { 00:15:33.135 "name": "BaseBdev2", 00:15:33.135 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:33.135 "is_configured": true, 00:15:33.135 "data_offset": 0, 00:15:33.135 "data_size": 65536 00:15:33.135 }, 00:15:33.135 { 00:15:33.135 "name": "BaseBdev3", 00:15:33.135 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:33.135 "is_configured": true, 00:15:33.135 "data_offset": 0, 00:15:33.135 "data_size": 65536 00:15:33.135 }, 00:15:33.135 { 00:15:33.135 "name": "BaseBdev4", 00:15:33.135 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:33.135 "is_configured": true, 00:15:33.135 "data_offset": 0, 00:15:33.135 "data_size": 65536 00:15:33.135 } 00:15:33.135 ] 00:15:33.135 }' 00:15:33.135 10:26:46 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.135 10:26:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:33.395 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.654 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.654 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4c7b37a4-f796-4dd4-993b-a1f04d68eaef 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.655 [2024-11-19 10:26:47.236682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:33.655 [2024-11-19 
10:26:47.236806] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:33.655 [2024-11-19 10:26:47.236831] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:33.655 [2024-11-19 10:26:47.237141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:33.655 [2024-11-19 10:26:47.243588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:33.655 NewBaseBdev 00:15:33.655 [2024-11-19 10:26:47.243646] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:33.655 [2024-11-19 10:26:47.243928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.655 [ 00:15:33.655 { 00:15:33.655 "name": "NewBaseBdev", 00:15:33.655 "aliases": [ 00:15:33.655 "4c7b37a4-f796-4dd4-993b-a1f04d68eaef" 00:15:33.655 ], 00:15:33.655 "product_name": "Malloc disk", 00:15:33.655 "block_size": 512, 00:15:33.655 "num_blocks": 65536, 00:15:33.655 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:33.655 "assigned_rate_limits": { 00:15:33.655 "rw_ios_per_sec": 0, 00:15:33.655 "rw_mbytes_per_sec": 0, 00:15:33.655 "r_mbytes_per_sec": 0, 00:15:33.655 "w_mbytes_per_sec": 0 00:15:33.655 }, 00:15:33.655 "claimed": true, 00:15:33.655 "claim_type": "exclusive_write", 00:15:33.655 "zoned": false, 00:15:33.655 "supported_io_types": { 00:15:33.655 "read": true, 00:15:33.655 "write": true, 00:15:33.655 "unmap": true, 00:15:33.655 "flush": true, 00:15:33.655 "reset": true, 00:15:33.655 "nvme_admin": false, 00:15:33.655 "nvme_io": false, 00:15:33.655 "nvme_io_md": false, 00:15:33.655 "write_zeroes": true, 00:15:33.655 "zcopy": true, 00:15:33.655 "get_zone_info": false, 00:15:33.655 "zone_management": false, 00:15:33.655 "zone_append": false, 00:15:33.655 "compare": false, 00:15:33.655 "compare_and_write": false, 00:15:33.655 "abort": true, 00:15:33.655 "seek_hole": false, 00:15:33.655 "seek_data": false, 00:15:33.655 "copy": true, 00:15:33.655 "nvme_iov_md": false 00:15:33.655 }, 00:15:33.655 "memory_domains": [ 00:15:33.655 { 00:15:33.655 "dma_device_id": "system", 00:15:33.655 "dma_device_type": 1 00:15:33.655 }, 00:15:33.655 { 00:15:33.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.655 "dma_device_type": 2 00:15:33.655 } 
00:15:33.655 ], 00:15:33.655 "driver_specific": {} 00:15:33.655 } 00:15:33.655 ] 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.655 "name": "Existed_Raid", 00:15:33.655 "uuid": "3fe1ea5f-7ff9-4b7c-af27-0544dfcdf3cb", 00:15:33.655 "strip_size_kb": 64, 00:15:33.655 "state": "online", 00:15:33.655 "raid_level": "raid5f", 00:15:33.655 "superblock": false, 00:15:33.655 "num_base_bdevs": 4, 00:15:33.655 "num_base_bdevs_discovered": 4, 00:15:33.655 "num_base_bdevs_operational": 4, 00:15:33.655 "base_bdevs_list": [ 00:15:33.655 { 00:15:33.655 "name": "NewBaseBdev", 00:15:33.655 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:33.655 "is_configured": true, 00:15:33.655 "data_offset": 0, 00:15:33.655 "data_size": 65536 00:15:33.655 }, 00:15:33.655 { 00:15:33.655 "name": "BaseBdev2", 00:15:33.655 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:33.655 "is_configured": true, 00:15:33.655 "data_offset": 0, 00:15:33.655 "data_size": 65536 00:15:33.655 }, 00:15:33.655 { 00:15:33.655 "name": "BaseBdev3", 00:15:33.655 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:33.655 "is_configured": true, 00:15:33.655 "data_offset": 0, 00:15:33.655 "data_size": 65536 00:15:33.655 }, 00:15:33.655 { 00:15:33.655 "name": "BaseBdev4", 00:15:33.655 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:33.655 "is_configured": true, 00:15:33.655 "data_offset": 0, 00:15:33.655 "data_size": 65536 00:15:33.655 } 00:15:33.655 ] 00:15:33.655 }' 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.655 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.225 [2024-11-19 10:26:47.755313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.225 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:34.225 "name": "Existed_Raid", 00:15:34.225 "aliases": [ 00:15:34.225 "3fe1ea5f-7ff9-4b7c-af27-0544dfcdf3cb" 00:15:34.225 ], 00:15:34.225 "product_name": "Raid Volume", 00:15:34.225 "block_size": 512, 00:15:34.225 "num_blocks": 196608, 00:15:34.225 "uuid": "3fe1ea5f-7ff9-4b7c-af27-0544dfcdf3cb", 00:15:34.225 "assigned_rate_limits": { 00:15:34.225 "rw_ios_per_sec": 0, 00:15:34.225 "rw_mbytes_per_sec": 0, 00:15:34.225 "r_mbytes_per_sec": 0, 00:15:34.225 "w_mbytes_per_sec": 0 00:15:34.225 }, 00:15:34.225 "claimed": false, 00:15:34.225 "zoned": false, 00:15:34.225 "supported_io_types": { 00:15:34.225 "read": true, 00:15:34.225 "write": true, 00:15:34.225 "unmap": false, 00:15:34.225 "flush": false, 00:15:34.225 "reset": true, 00:15:34.225 "nvme_admin": false, 00:15:34.225 "nvme_io": false, 00:15:34.225 "nvme_io_md": 
false, 00:15:34.225 "write_zeroes": true, 00:15:34.225 "zcopy": false, 00:15:34.225 "get_zone_info": false, 00:15:34.225 "zone_management": false, 00:15:34.225 "zone_append": false, 00:15:34.225 "compare": false, 00:15:34.225 "compare_and_write": false, 00:15:34.225 "abort": false, 00:15:34.225 "seek_hole": false, 00:15:34.225 "seek_data": false, 00:15:34.225 "copy": false, 00:15:34.225 "nvme_iov_md": false 00:15:34.225 }, 00:15:34.225 "driver_specific": { 00:15:34.225 "raid": { 00:15:34.225 "uuid": "3fe1ea5f-7ff9-4b7c-af27-0544dfcdf3cb", 00:15:34.225 "strip_size_kb": 64, 00:15:34.225 "state": "online", 00:15:34.225 "raid_level": "raid5f", 00:15:34.225 "superblock": false, 00:15:34.225 "num_base_bdevs": 4, 00:15:34.225 "num_base_bdevs_discovered": 4, 00:15:34.225 "num_base_bdevs_operational": 4, 00:15:34.225 "base_bdevs_list": [ 00:15:34.225 { 00:15:34.225 "name": "NewBaseBdev", 00:15:34.225 "uuid": "4c7b37a4-f796-4dd4-993b-a1f04d68eaef", 00:15:34.225 "is_configured": true, 00:15:34.225 "data_offset": 0, 00:15:34.225 "data_size": 65536 00:15:34.225 }, 00:15:34.225 { 00:15:34.225 "name": "BaseBdev2", 00:15:34.225 "uuid": "e2550400-2dfa-4575-9e0d-6fc4c987cdf0", 00:15:34.225 "is_configured": true, 00:15:34.225 "data_offset": 0, 00:15:34.225 "data_size": 65536 00:15:34.225 }, 00:15:34.225 { 00:15:34.225 "name": "BaseBdev3", 00:15:34.225 "uuid": "e9e7cb5b-9435-4af6-83a6-862e79ec3d6c", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 0, 00:15:34.226 "data_size": 65536 00:15:34.226 }, 00:15:34.226 { 00:15:34.226 "name": "BaseBdev4", 00:15:34.226 "uuid": "0d2ff26d-47ce-4356-ad87-e4cc378f3dab", 00:15:34.226 "is_configured": true, 00:15:34.226 "data_offset": 0, 00:15:34.226 "data_size": 65536 00:15:34.226 } 00:15:34.226 ] 00:15:34.226 } 00:15:34.226 } 00:15:34.226 }' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:34.226 10:26:47 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:34.226 BaseBdev2 00:15:34.226 BaseBdev3 00:15:34.226 BaseBdev4' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.226 10:26:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:34.226 10:26:48 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.485 10:26:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:34.485 10:26:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:34.485 10:26:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:34.485 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.486 [2024-11-19 10:26:48.026573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.486 [2024-11-19 10:26:48.026638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:34.486 [2024-11-19 10:26:48.026740] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:34.486 [2024-11-19 10:26:48.027057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:34.486 [2024-11-19 10:26:48.027110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82452 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82452 ']' 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82452 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.486 10:26:48 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82452 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82452' 00:15:34.486 killing process with pid 82452 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82452 00:15:34.486 [2024-11-19 10:26:48.074650] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.486 10:26:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82452 00:15:34.745 [2024-11-19 10:26:48.439541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:36.128 00:15:36.128 real 0m11.433s 00:15:36.128 user 0m18.306s 00:15:36.128 sys 0m2.122s 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.128 ************************************ 00:15:36.128 END TEST raid5f_state_function_test 00:15:36.128 ************************************ 00:15:36.128 10:26:49 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:36.128 10:26:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:36.128 10:26:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.128 10:26:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.128 ************************************ 00:15:36.128 START TEST 
raid5f_state_function_test_sb 00:15:36.128 ************************************ 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:36.128 
10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:36.128 Process raid pid: 83118 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83118 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83118' 00:15:36.128 10:26:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83118 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83118 ']' 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.128 10:26:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.128 [2024-11-19 10:26:49.645560] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:15:36.128 [2024-11-19 10:26:49.645731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.128 [2024-11-19 10:26:49.818408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.388 [2024-11-19 10:26:49.924528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.388 [2024-11-19 10:26:50.127248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.388 [2024-11-19 10:26:50.127366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.966 [2024-11-19 10:26:50.466326] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:36.966 [2024-11-19 10:26:50.466418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:36.966 [2024-11-19 10:26:50.466464] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:36.966 [2024-11-19 10:26:50.466487] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:36.966 [2024-11-19 10:26:50.466505] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:36.966 [2024-11-19 10:26:50.466525] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:36.966 [2024-11-19 10:26:50.466542] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:36.966 [2024-11-19 10:26:50.466562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.966 "name": "Existed_Raid", 00:15:36.966 "uuid": "9fd3a729-c00c-4df6-bbb7-65084d11d252", 00:15:36.966 "strip_size_kb": 64, 00:15:36.966 "state": "configuring", 00:15:36.966 "raid_level": "raid5f", 00:15:36.966 "superblock": true, 00:15:36.966 "num_base_bdevs": 4, 00:15:36.966 "num_base_bdevs_discovered": 0, 00:15:36.966 "num_base_bdevs_operational": 4, 00:15:36.966 "base_bdevs_list": [ 00:15:36.966 { 00:15:36.966 "name": "BaseBdev1", 00:15:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.966 "is_configured": false, 00:15:36.966 "data_offset": 0, 00:15:36.966 "data_size": 0 00:15:36.966 }, 00:15:36.966 { 00:15:36.966 "name": "BaseBdev2", 00:15:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.966 "is_configured": false, 00:15:36.966 "data_offset": 0, 00:15:36.966 "data_size": 0 00:15:36.966 }, 00:15:36.966 { 00:15:36.966 "name": "BaseBdev3", 00:15:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.966 "is_configured": false, 00:15:36.966 "data_offset": 0, 00:15:36.966 "data_size": 0 00:15:36.966 }, 00:15:36.966 { 00:15:36.966 "name": "BaseBdev4", 00:15:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.966 "is_configured": false, 00:15:36.966 "data_offset": 0, 00:15:36.966 "data_size": 0 00:15:36.966 } 00:15:36.966 ] 00:15:36.966 }' 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.966 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.226 [2024-11-19 10:26:50.945454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.226 [2024-11-19 10:26:50.945531] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.226 [2024-11-19 10:26:50.957436] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:37.226 [2024-11-19 10:26:50.957478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:37.226 [2024-11-19 10:26:50.957487] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.226 [2024-11-19 10:26:50.957496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.226 [2024-11-19 10:26:50.957502] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.226 [2024-11-19 10:26:50.957510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.226 [2024-11-19 10:26:50.957516] 
bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:37.226 [2024-11-19 10:26:50.957524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.226 [2024-11-19 10:26:50.999199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.226 BaseBdev1 00:15:37.226 10:26:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.226 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:37.486 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.487 [ 00:15:37.487 { 00:15:37.487 "name": "BaseBdev1", 00:15:37.487 "aliases": [ 00:15:37.487 "c34e5e60-2ce2-4b79-90f8-f3195062d339" 00:15:37.487 ], 00:15:37.487 "product_name": "Malloc disk", 00:15:37.487 "block_size": 512, 00:15:37.487 "num_blocks": 65536, 00:15:37.487 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:37.487 "assigned_rate_limits": { 00:15:37.487 "rw_ios_per_sec": 0, 00:15:37.487 "rw_mbytes_per_sec": 0, 00:15:37.487 "r_mbytes_per_sec": 0, 00:15:37.487 "w_mbytes_per_sec": 0 00:15:37.487 }, 00:15:37.487 "claimed": true, 00:15:37.487 "claim_type": "exclusive_write", 00:15:37.487 "zoned": false, 00:15:37.487 "supported_io_types": { 00:15:37.487 "read": true, 00:15:37.487 "write": true, 00:15:37.487 "unmap": true, 00:15:37.487 "flush": true, 00:15:37.487 "reset": true, 00:15:37.487 "nvme_admin": false, 00:15:37.487 "nvme_io": false, 00:15:37.487 "nvme_io_md": false, 00:15:37.487 "write_zeroes": true, 00:15:37.487 "zcopy": true, 00:15:37.487 "get_zone_info": false, 00:15:37.487 "zone_management": false, 00:15:37.487 "zone_append": false, 00:15:37.487 "compare": false, 00:15:37.487 "compare_and_write": false, 00:15:37.487 "abort": true, 00:15:37.487 "seek_hole": false, 00:15:37.487 "seek_data": false, 00:15:37.487 "copy": true, 00:15:37.487 "nvme_iov_md": false 00:15:37.487 }, 00:15:37.487 "memory_domains": [ 00:15:37.487 { 00:15:37.487 "dma_device_id": "system", 00:15:37.487 "dma_device_type": 1 00:15:37.487 }, 00:15:37.487 { 00:15:37.487 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:37.487 "dma_device_type": 2 00:15:37.487 } 00:15:37.487 ], 00:15:37.487 "driver_specific": {} 00:15:37.487 } 00:15:37.487 ] 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.487 10:26:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.487 "name": "Existed_Raid", 00:15:37.487 "uuid": "ec2a058e-feb1-496d-8178-67e7a2605697", 00:15:37.487 "strip_size_kb": 64, 00:15:37.487 "state": "configuring", 00:15:37.487 "raid_level": "raid5f", 00:15:37.487 "superblock": true, 00:15:37.487 "num_base_bdevs": 4, 00:15:37.487 "num_base_bdevs_discovered": 1, 00:15:37.487 "num_base_bdevs_operational": 4, 00:15:37.487 "base_bdevs_list": [ 00:15:37.487 { 00:15:37.487 "name": "BaseBdev1", 00:15:37.487 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:37.487 "is_configured": true, 00:15:37.487 "data_offset": 2048, 00:15:37.487 "data_size": 63488 00:15:37.487 }, 00:15:37.487 { 00:15:37.487 "name": "BaseBdev2", 00:15:37.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.487 "is_configured": false, 00:15:37.487 "data_offset": 0, 00:15:37.487 "data_size": 0 00:15:37.487 }, 00:15:37.487 { 00:15:37.487 "name": "BaseBdev3", 00:15:37.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.487 "is_configured": false, 00:15:37.487 "data_offset": 0, 00:15:37.487 "data_size": 0 00:15:37.487 }, 00:15:37.487 { 00:15:37.487 "name": "BaseBdev4", 00:15:37.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.487 "is_configured": false, 00:15:37.487 "data_offset": 0, 00:15:37.487 "data_size": 0 00:15:37.487 } 00:15:37.487 ] 00:15:37.487 }' 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.487 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.747 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:37.747 10:26:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.747 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.747 [2024-11-19 10:26:51.494375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:37.747 [2024-11-19 10:26:51.494456] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:37.747 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.747 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:37.747 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.747 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.747 [2024-11-19 10:26:51.506409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.747 [2024-11-19 10:26:51.508215] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:37.747 [2024-11-19 10:26:51.508306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:37.747 [2024-11-19 10:26:51.508334] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:37.747 [2024-11-19 10:26:51.508358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:37.747 [2024-11-19 10:26:51.508376] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:37.748 [2024-11-19 10:26:51.508396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.748 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.008 10:26:51 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.008 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.008 "name": "Existed_Raid", 00:15:38.008 "uuid": "10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:38.008 "strip_size_kb": 64, 00:15:38.008 "state": "configuring", 00:15:38.008 "raid_level": "raid5f", 00:15:38.008 "superblock": true, 00:15:38.008 "num_base_bdevs": 4, 00:15:38.008 "num_base_bdevs_discovered": 1, 00:15:38.008 "num_base_bdevs_operational": 4, 00:15:38.008 "base_bdevs_list": [ 00:15:38.008 { 00:15:38.008 "name": "BaseBdev1", 00:15:38.008 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:38.008 "is_configured": true, 00:15:38.008 "data_offset": 2048, 00:15:38.008 "data_size": 63488 00:15:38.008 }, 00:15:38.008 { 00:15:38.008 "name": "BaseBdev2", 00:15:38.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.008 "is_configured": false, 00:15:38.008 "data_offset": 0, 00:15:38.008 "data_size": 0 00:15:38.008 }, 00:15:38.008 { 00:15:38.008 "name": "BaseBdev3", 00:15:38.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.008 "is_configured": false, 00:15:38.008 "data_offset": 0, 00:15:38.008 "data_size": 0 00:15:38.008 }, 00:15:38.008 { 00:15:38.008 "name": "BaseBdev4", 00:15:38.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.008 "is_configured": false, 00:15:38.008 "data_offset": 0, 00:15:38.008 "data_size": 0 00:15:38.008 } 00:15:38.008 ] 00:15:38.008 }' 00:15:38.008 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.008 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.268 [2024-11-19 10:26:51.951725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.268 BaseBdev2 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.268 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 [ 00:15:38.269 { 00:15:38.269 "name": "BaseBdev2", 00:15:38.269 "aliases": [ 00:15:38.269 
"4eefdc94-ae67-4e01-a2e7-6cad5fafe463" 00:15:38.269 ], 00:15:38.269 "product_name": "Malloc disk", 00:15:38.269 "block_size": 512, 00:15:38.269 "num_blocks": 65536, 00:15:38.269 "uuid": "4eefdc94-ae67-4e01-a2e7-6cad5fafe463", 00:15:38.269 "assigned_rate_limits": { 00:15:38.269 "rw_ios_per_sec": 0, 00:15:38.269 "rw_mbytes_per_sec": 0, 00:15:38.269 "r_mbytes_per_sec": 0, 00:15:38.269 "w_mbytes_per_sec": 0 00:15:38.269 }, 00:15:38.269 "claimed": true, 00:15:38.269 "claim_type": "exclusive_write", 00:15:38.269 "zoned": false, 00:15:38.269 "supported_io_types": { 00:15:38.269 "read": true, 00:15:38.269 "write": true, 00:15:38.269 "unmap": true, 00:15:38.269 "flush": true, 00:15:38.269 "reset": true, 00:15:38.269 "nvme_admin": false, 00:15:38.269 "nvme_io": false, 00:15:38.269 "nvme_io_md": false, 00:15:38.269 "write_zeroes": true, 00:15:38.269 "zcopy": true, 00:15:38.269 "get_zone_info": false, 00:15:38.269 "zone_management": false, 00:15:38.269 "zone_append": false, 00:15:38.269 "compare": false, 00:15:38.269 "compare_and_write": false, 00:15:38.269 "abort": true, 00:15:38.269 "seek_hole": false, 00:15:38.269 "seek_data": false, 00:15:38.269 "copy": true, 00:15:38.269 "nvme_iov_md": false 00:15:38.269 }, 00:15:38.269 "memory_domains": [ 00:15:38.269 { 00:15:38.269 "dma_device_id": "system", 00:15:38.269 "dma_device_type": 1 00:15:38.269 }, 00:15:38.269 { 00:15:38.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.269 "dma_device_type": 2 00:15:38.269 } 00:15:38.269 ], 00:15:38.269 "driver_specific": {} 00:15:38.269 } 00:15:38.269 ] 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.269 10:26:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.269 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.269 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.269 "name": "Existed_Raid", 00:15:38.269 "uuid": 
"10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:38.269 "strip_size_kb": 64, 00:15:38.269 "state": "configuring", 00:15:38.269 "raid_level": "raid5f", 00:15:38.269 "superblock": true, 00:15:38.269 "num_base_bdevs": 4, 00:15:38.269 "num_base_bdevs_discovered": 2, 00:15:38.269 "num_base_bdevs_operational": 4, 00:15:38.269 "base_bdevs_list": [ 00:15:38.269 { 00:15:38.269 "name": "BaseBdev1", 00:15:38.269 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:38.269 "is_configured": true, 00:15:38.269 "data_offset": 2048, 00:15:38.269 "data_size": 63488 00:15:38.269 }, 00:15:38.269 { 00:15:38.269 "name": "BaseBdev2", 00:15:38.269 "uuid": "4eefdc94-ae67-4e01-a2e7-6cad5fafe463", 00:15:38.269 "is_configured": true, 00:15:38.269 "data_offset": 2048, 00:15:38.269 "data_size": 63488 00:15:38.269 }, 00:15:38.269 { 00:15:38.269 "name": "BaseBdev3", 00:15:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.269 "is_configured": false, 00:15:38.269 "data_offset": 0, 00:15:38.269 "data_size": 0 00:15:38.269 }, 00:15:38.269 { 00:15:38.269 "name": "BaseBdev4", 00:15:38.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.269 "is_configured": false, 00:15:38.269 "data_offset": 0, 00:15:38.269 "data_size": 0 00:15:38.269 } 00:15:38.269 ] 00:15:38.269 }' 00:15:38.269 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.269 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.840 [2024-11-19 10:26:52.493712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.840 BaseBdev3 
00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.840 [ 00:15:38.840 { 00:15:38.840 "name": "BaseBdev3", 00:15:38.840 "aliases": [ 00:15:38.840 "239066e9-859f-426a-938f-891fb30ce9f1" 00:15:38.840 ], 00:15:38.840 "product_name": "Malloc disk", 00:15:38.840 "block_size": 512, 00:15:38.840 "num_blocks": 65536, 00:15:38.840 "uuid": "239066e9-859f-426a-938f-891fb30ce9f1", 00:15:38.840 
"assigned_rate_limits": { 00:15:38.840 "rw_ios_per_sec": 0, 00:15:38.840 "rw_mbytes_per_sec": 0, 00:15:38.840 "r_mbytes_per_sec": 0, 00:15:38.840 "w_mbytes_per_sec": 0 00:15:38.840 }, 00:15:38.840 "claimed": true, 00:15:38.840 "claim_type": "exclusive_write", 00:15:38.840 "zoned": false, 00:15:38.840 "supported_io_types": { 00:15:38.840 "read": true, 00:15:38.840 "write": true, 00:15:38.840 "unmap": true, 00:15:38.840 "flush": true, 00:15:38.840 "reset": true, 00:15:38.840 "nvme_admin": false, 00:15:38.840 "nvme_io": false, 00:15:38.840 "nvme_io_md": false, 00:15:38.840 "write_zeroes": true, 00:15:38.840 "zcopy": true, 00:15:38.840 "get_zone_info": false, 00:15:38.840 "zone_management": false, 00:15:38.840 "zone_append": false, 00:15:38.840 "compare": false, 00:15:38.840 "compare_and_write": false, 00:15:38.840 "abort": true, 00:15:38.840 "seek_hole": false, 00:15:38.840 "seek_data": false, 00:15:38.840 "copy": true, 00:15:38.840 "nvme_iov_md": false 00:15:38.840 }, 00:15:38.840 "memory_domains": [ 00:15:38.840 { 00:15:38.840 "dma_device_id": "system", 00:15:38.840 "dma_device_type": 1 00:15:38.840 }, 00:15:38.840 { 00:15:38.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.840 "dma_device_type": 2 00:15:38.840 } 00:15:38.840 ], 00:15:38.840 "driver_specific": {} 00:15:38.840 } 00:15:38.840 ] 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:38.840 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.841 "name": "Existed_Raid", 00:15:38.841 "uuid": "10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:38.841 "strip_size_kb": 64, 00:15:38.841 "state": "configuring", 00:15:38.841 "raid_level": "raid5f", 00:15:38.841 "superblock": true, 00:15:38.841 "num_base_bdevs": 4, 00:15:38.841 "num_base_bdevs_discovered": 3, 
00:15:38.841 "num_base_bdevs_operational": 4, 00:15:38.841 "base_bdevs_list": [ 00:15:38.841 { 00:15:38.841 "name": "BaseBdev1", 00:15:38.841 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:38.841 "is_configured": true, 00:15:38.841 "data_offset": 2048, 00:15:38.841 "data_size": 63488 00:15:38.841 }, 00:15:38.841 { 00:15:38.841 "name": "BaseBdev2", 00:15:38.841 "uuid": "4eefdc94-ae67-4e01-a2e7-6cad5fafe463", 00:15:38.841 "is_configured": true, 00:15:38.841 "data_offset": 2048, 00:15:38.841 "data_size": 63488 00:15:38.841 }, 00:15:38.841 { 00:15:38.841 "name": "BaseBdev3", 00:15:38.841 "uuid": "239066e9-859f-426a-938f-891fb30ce9f1", 00:15:38.841 "is_configured": true, 00:15:38.841 "data_offset": 2048, 00:15:38.841 "data_size": 63488 00:15:38.841 }, 00:15:38.841 { 00:15:38.841 "name": "BaseBdev4", 00:15:38.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.841 "is_configured": false, 00:15:38.841 "data_offset": 0, 00:15:38.841 "data_size": 0 00:15:38.841 } 00:15:38.841 ] 00:15:38.841 }' 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.841 10:26:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.411 [2024-11-19 10:26:53.053328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:39.411 [2024-11-19 10:26:53.053667] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:39.411 [2024-11-19 10:26:53.053722] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:39.411 [2024-11-19 
10:26:53.053980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.411 BaseBdev4 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:39.411 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.412 [2024-11-19 10:26:53.061199] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:39.412 [2024-11-19 10:26:53.061259] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:39.412 [2024-11-19 10:26:53.061557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:39.412 10:26:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.412 [ 00:15:39.412 { 00:15:39.412 "name": "BaseBdev4", 00:15:39.412 "aliases": [ 00:15:39.412 "e0682cff-bd74-4bd3-8388-47534a1d0148" 00:15:39.412 ], 00:15:39.412 "product_name": "Malloc disk", 00:15:39.412 "block_size": 512, 00:15:39.412 "num_blocks": 65536, 00:15:39.412 "uuid": "e0682cff-bd74-4bd3-8388-47534a1d0148", 00:15:39.412 "assigned_rate_limits": { 00:15:39.412 "rw_ios_per_sec": 0, 00:15:39.412 "rw_mbytes_per_sec": 0, 00:15:39.412 "r_mbytes_per_sec": 0, 00:15:39.412 "w_mbytes_per_sec": 0 00:15:39.412 }, 00:15:39.412 "claimed": true, 00:15:39.412 "claim_type": "exclusive_write", 00:15:39.412 "zoned": false, 00:15:39.412 "supported_io_types": { 00:15:39.412 "read": true, 00:15:39.412 "write": true, 00:15:39.412 "unmap": true, 00:15:39.412 "flush": true, 00:15:39.412 "reset": true, 00:15:39.412 "nvme_admin": false, 00:15:39.412 "nvme_io": false, 00:15:39.412 "nvme_io_md": false, 00:15:39.412 "write_zeroes": true, 00:15:39.412 "zcopy": true, 00:15:39.412 "get_zone_info": false, 00:15:39.412 "zone_management": false, 00:15:39.412 "zone_append": false, 00:15:39.412 "compare": false, 00:15:39.412 "compare_and_write": false, 00:15:39.412 "abort": true, 00:15:39.412 "seek_hole": false, 00:15:39.412 "seek_data": false, 00:15:39.412 "copy": true, 00:15:39.412 "nvme_iov_md": false 00:15:39.412 }, 00:15:39.412 "memory_domains": [ 00:15:39.412 { 00:15:39.412 "dma_device_id": "system", 00:15:39.412 "dma_device_type": 1 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.412 "dma_device_type": 2 00:15:39.412 } 00:15:39.412 ], 00:15:39.412 "driver_specific": {} 00:15:39.412 } 00:15:39.412 ] 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.412 10:26:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.412 "name": "Existed_Raid", 00:15:39.412 "uuid": "10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:39.412 "strip_size_kb": 64, 00:15:39.412 "state": "online", 00:15:39.412 "raid_level": "raid5f", 00:15:39.412 "superblock": true, 00:15:39.412 "num_base_bdevs": 4, 00:15:39.412 "num_base_bdevs_discovered": 4, 00:15:39.412 "num_base_bdevs_operational": 4, 00:15:39.412 "base_bdevs_list": [ 00:15:39.412 { 00:15:39.412 "name": "BaseBdev1", 00:15:39.412 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:39.412 "is_configured": true, 00:15:39.412 "data_offset": 2048, 00:15:39.412 "data_size": 63488 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "name": "BaseBdev2", 00:15:39.412 "uuid": "4eefdc94-ae67-4e01-a2e7-6cad5fafe463", 00:15:39.412 "is_configured": true, 00:15:39.412 "data_offset": 2048, 00:15:39.412 "data_size": 63488 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "name": "BaseBdev3", 00:15:39.412 "uuid": "239066e9-859f-426a-938f-891fb30ce9f1", 00:15:39.412 "is_configured": true, 00:15:39.412 "data_offset": 2048, 00:15:39.412 "data_size": 63488 00:15:39.412 }, 00:15:39.412 { 00:15:39.412 "name": "BaseBdev4", 00:15:39.412 "uuid": "e0682cff-bd74-4bd3-8388-47534a1d0148", 00:15:39.412 "is_configured": true, 00:15:39.412 "data_offset": 2048, 00:15:39.412 "data_size": 63488 00:15:39.412 } 00:15:39.412 ] 00:15:39.412 }' 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.412 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.994 [2024-11-19 10:26:53.533041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.994 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:39.994 "name": "Existed_Raid", 00:15:39.994 "aliases": [ 00:15:39.994 "10759c2a-5820-4ac9-bae9-5b6a7b1d9956" 00:15:39.994 ], 00:15:39.994 "product_name": "Raid Volume", 00:15:39.994 "block_size": 512, 00:15:39.994 "num_blocks": 190464, 00:15:39.994 "uuid": "10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:39.994 "assigned_rate_limits": { 00:15:39.994 "rw_ios_per_sec": 0, 00:15:39.994 "rw_mbytes_per_sec": 0, 00:15:39.994 "r_mbytes_per_sec": 0, 00:15:39.994 "w_mbytes_per_sec": 0 00:15:39.994 }, 00:15:39.994 "claimed": false, 00:15:39.994 "zoned": false, 00:15:39.994 "supported_io_types": { 00:15:39.994 "read": true, 00:15:39.994 "write": true, 00:15:39.994 "unmap": false, 00:15:39.994 "flush": false, 
00:15:39.995 "reset": true, 00:15:39.995 "nvme_admin": false, 00:15:39.995 "nvme_io": false, 00:15:39.995 "nvme_io_md": false, 00:15:39.995 "write_zeroes": true, 00:15:39.995 "zcopy": false, 00:15:39.995 "get_zone_info": false, 00:15:39.995 "zone_management": false, 00:15:39.995 "zone_append": false, 00:15:39.995 "compare": false, 00:15:39.995 "compare_and_write": false, 00:15:39.995 "abort": false, 00:15:39.995 "seek_hole": false, 00:15:39.995 "seek_data": false, 00:15:39.995 "copy": false, 00:15:39.995 "nvme_iov_md": false 00:15:39.995 }, 00:15:39.995 "driver_specific": { 00:15:39.995 "raid": { 00:15:39.995 "uuid": "10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:39.995 "strip_size_kb": 64, 00:15:39.995 "state": "online", 00:15:39.995 "raid_level": "raid5f", 00:15:39.995 "superblock": true, 00:15:39.995 "num_base_bdevs": 4, 00:15:39.995 "num_base_bdevs_discovered": 4, 00:15:39.995 "num_base_bdevs_operational": 4, 00:15:39.995 "base_bdevs_list": [ 00:15:39.995 { 00:15:39.995 "name": "BaseBdev1", 00:15:39.995 "uuid": "c34e5e60-2ce2-4b79-90f8-f3195062d339", 00:15:39.995 "is_configured": true, 00:15:39.995 "data_offset": 2048, 00:15:39.995 "data_size": 63488 00:15:39.995 }, 00:15:39.995 { 00:15:39.995 "name": "BaseBdev2", 00:15:39.995 "uuid": "4eefdc94-ae67-4e01-a2e7-6cad5fafe463", 00:15:39.995 "is_configured": true, 00:15:39.995 "data_offset": 2048, 00:15:39.995 "data_size": 63488 00:15:39.995 }, 00:15:39.995 { 00:15:39.995 "name": "BaseBdev3", 00:15:39.995 "uuid": "239066e9-859f-426a-938f-891fb30ce9f1", 00:15:39.995 "is_configured": true, 00:15:39.995 "data_offset": 2048, 00:15:39.995 "data_size": 63488 00:15:39.995 }, 00:15:39.995 { 00:15:39.995 "name": "BaseBdev4", 00:15:39.995 "uuid": "e0682cff-bd74-4bd3-8388-47534a1d0148", 00:15:39.995 "is_configured": true, 00:15:39.995 "data_offset": 2048, 00:15:39.995 "data_size": 63488 00:15:39.995 } 00:15:39.995 ] 00:15:39.995 } 00:15:39.995 } 00:15:39.995 }' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:39.995 BaseBdev2 00:15:39.995 BaseBdev3 00:15:39.995 BaseBdev4' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:39.995 10:26:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:39.995 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.269 [2024-11-19 10:26:53.856321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.269 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.270 10:26:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.270 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.270 "name": "Existed_Raid", 00:15:40.270 "uuid": "10759c2a-5820-4ac9-bae9-5b6a7b1d9956", 00:15:40.270 "strip_size_kb": 64, 00:15:40.270 "state": "online", 00:15:40.270 "raid_level": "raid5f", 00:15:40.270 "superblock": true, 00:15:40.270 "num_base_bdevs": 4, 00:15:40.270 "num_base_bdevs_discovered": 3, 00:15:40.270 "num_base_bdevs_operational": 3, 00:15:40.270 "base_bdevs_list": [ 00:15:40.270 { 00:15:40.270 "name": null, 00:15:40.270 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:40.270 "is_configured": false, 00:15:40.270 "data_offset": 0, 00:15:40.270 "data_size": 63488 00:15:40.270 }, 00:15:40.270 { 00:15:40.270 "name": "BaseBdev2", 00:15:40.270 "uuid": "4eefdc94-ae67-4e01-a2e7-6cad5fafe463", 00:15:40.270 "is_configured": true, 00:15:40.270 "data_offset": 2048, 00:15:40.270 "data_size": 63488 00:15:40.270 }, 00:15:40.270 { 00:15:40.270 "name": "BaseBdev3", 00:15:40.270 "uuid": "239066e9-859f-426a-938f-891fb30ce9f1", 00:15:40.270 "is_configured": true, 00:15:40.270 "data_offset": 2048, 00:15:40.270 "data_size": 63488 00:15:40.270 }, 00:15:40.270 { 00:15:40.270 "name": "BaseBdev4", 00:15:40.270 "uuid": "e0682cff-bd74-4bd3-8388-47534a1d0148", 00:15:40.270 "is_configured": true, 00:15:40.270 "data_offset": 2048, 00:15:40.270 "data_size": 63488 00:15:40.270 } 00:15:40.270 ] 00:15:40.270 }' 00:15:40.270 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.270 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:40.839 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.840 [2024-11-19 10:26:54.471346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.840 [2024-11-19 10:26:54.471543] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.840 [2024-11-19 10:26:54.559530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:40.840 
10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.840 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.100 [2024-11-19 10:26:54.619483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.100 [2024-11-19 10:26:54.765847] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:41.100 [2024-11-19 10:26:54.765935] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.100 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.362 BaseBdev2 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 [ 00:15:41.362 { 00:15:41.362 "name": "BaseBdev2", 00:15:41.362 "aliases": [ 00:15:41.362 "138d2e28-8583-434f-ad51-31d66f9c71c3" 00:15:41.362 ], 00:15:41.362 "product_name": "Malloc disk", 00:15:41.362 "block_size": 512, 00:15:41.362 "num_blocks": 65536, 00:15:41.362 "uuid": 
"138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:41.362 "assigned_rate_limits": { 00:15:41.362 "rw_ios_per_sec": 0, 00:15:41.362 "rw_mbytes_per_sec": 0, 00:15:41.362 "r_mbytes_per_sec": 0, 00:15:41.362 "w_mbytes_per_sec": 0 00:15:41.362 }, 00:15:41.362 "claimed": false, 00:15:41.362 "zoned": false, 00:15:41.362 "supported_io_types": { 00:15:41.362 "read": true, 00:15:41.362 "write": true, 00:15:41.362 "unmap": true, 00:15:41.362 "flush": true, 00:15:41.362 "reset": true, 00:15:41.362 "nvme_admin": false, 00:15:41.362 "nvme_io": false, 00:15:41.362 "nvme_io_md": false, 00:15:41.362 "write_zeroes": true, 00:15:41.362 "zcopy": true, 00:15:41.362 "get_zone_info": false, 00:15:41.362 "zone_management": false, 00:15:41.362 "zone_append": false, 00:15:41.362 "compare": false, 00:15:41.362 "compare_and_write": false, 00:15:41.362 "abort": true, 00:15:41.362 "seek_hole": false, 00:15:41.362 "seek_data": false, 00:15:41.362 "copy": true, 00:15:41.362 "nvme_iov_md": false 00:15:41.362 }, 00:15:41.362 "memory_domains": [ 00:15:41.362 { 00:15:41.362 "dma_device_id": "system", 00:15:41.362 "dma_device_type": 1 00:15:41.362 }, 00:15:41.362 { 00:15:41.362 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.362 "dma_device_type": 2 00:15:41.362 } 00:15:41.362 ], 00:15:41.362 "driver_specific": {} 00:15:41.362 } 00:15:41.362 ] 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.362 10:26:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 BaseBdev3 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.362 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.362 [ 00:15:41.362 { 00:15:41.362 "name": "BaseBdev3", 00:15:41.362 "aliases": [ 00:15:41.362 "52b29925-c840-4257-8d87-ee4deb9138b6" 00:15:41.362 ], 00:15:41.362 
"product_name": "Malloc disk", 00:15:41.362 "block_size": 512, 00:15:41.362 "num_blocks": 65536, 00:15:41.362 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:41.362 "assigned_rate_limits": { 00:15:41.362 "rw_ios_per_sec": 0, 00:15:41.362 "rw_mbytes_per_sec": 0, 00:15:41.362 "r_mbytes_per_sec": 0, 00:15:41.362 "w_mbytes_per_sec": 0 00:15:41.362 }, 00:15:41.362 "claimed": false, 00:15:41.362 "zoned": false, 00:15:41.362 "supported_io_types": { 00:15:41.362 "read": true, 00:15:41.362 "write": true, 00:15:41.362 "unmap": true, 00:15:41.362 "flush": true, 00:15:41.362 "reset": true, 00:15:41.362 "nvme_admin": false, 00:15:41.362 "nvme_io": false, 00:15:41.362 "nvme_io_md": false, 00:15:41.362 "write_zeroes": true, 00:15:41.362 "zcopy": true, 00:15:41.362 "get_zone_info": false, 00:15:41.362 "zone_management": false, 00:15:41.362 "zone_append": false, 00:15:41.362 "compare": false, 00:15:41.362 "compare_and_write": false, 00:15:41.362 "abort": true, 00:15:41.362 "seek_hole": false, 00:15:41.362 "seek_data": false, 00:15:41.362 "copy": true, 00:15:41.362 "nvme_iov_md": false 00:15:41.362 }, 00:15:41.362 "memory_domains": [ 00:15:41.362 { 00:15:41.362 "dma_device_id": "system", 00:15:41.362 "dma_device_type": 1 00:15:41.362 }, 00:15:41.362 { 00:15:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.363 "dma_device_type": 2 00:15:41.363 } 00:15:41.363 ], 00:15:41.363 "driver_specific": {} 00:15:41.363 } 00:15:41.363 ] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.363 BaseBdev4 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.363 [ 00:15:41.363 { 00:15:41.363 "name": "BaseBdev4", 00:15:41.363 
"aliases": [ 00:15:41.363 "e845ff3d-1126-497a-820a-8dfe504d859c" 00:15:41.363 ], 00:15:41.363 "product_name": "Malloc disk", 00:15:41.363 "block_size": 512, 00:15:41.363 "num_blocks": 65536, 00:15:41.363 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:41.363 "assigned_rate_limits": { 00:15:41.363 "rw_ios_per_sec": 0, 00:15:41.363 "rw_mbytes_per_sec": 0, 00:15:41.363 "r_mbytes_per_sec": 0, 00:15:41.363 "w_mbytes_per_sec": 0 00:15:41.363 }, 00:15:41.363 "claimed": false, 00:15:41.363 "zoned": false, 00:15:41.363 "supported_io_types": { 00:15:41.363 "read": true, 00:15:41.363 "write": true, 00:15:41.363 "unmap": true, 00:15:41.363 "flush": true, 00:15:41.363 "reset": true, 00:15:41.363 "nvme_admin": false, 00:15:41.363 "nvme_io": false, 00:15:41.363 "nvme_io_md": false, 00:15:41.363 "write_zeroes": true, 00:15:41.363 "zcopy": true, 00:15:41.363 "get_zone_info": false, 00:15:41.363 "zone_management": false, 00:15:41.363 "zone_append": false, 00:15:41.363 "compare": false, 00:15:41.363 "compare_and_write": false, 00:15:41.363 "abort": true, 00:15:41.363 "seek_hole": false, 00:15:41.363 "seek_data": false, 00:15:41.363 "copy": true, 00:15:41.363 "nvme_iov_md": false 00:15:41.363 }, 00:15:41.363 "memory_domains": [ 00:15:41.363 { 00:15:41.363 "dma_device_id": "system", 00:15:41.363 "dma_device_type": 1 00:15:41.363 }, 00:15:41.363 { 00:15:41.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.363 "dma_device_type": 2 00:15:41.363 } 00:15:41.363 ], 00:15:41.363 "driver_specific": {} 00:15:41.363 } 00:15:41.363 ] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:41.363 
10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.363 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.363 [2024-11-19 10:26:55.137219] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:41.363 [2024-11-19 10:26:55.137313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:41.363 [2024-11-19 10:26:55.137355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:41.363 [2024-11-19 10:26:55.139151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.363 [2024-11-19 10:26:55.139242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.624 "name": "Existed_Raid", 00:15:41.624 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:41.624 "strip_size_kb": 64, 00:15:41.624 "state": "configuring", 00:15:41.624 "raid_level": "raid5f", 00:15:41.624 "superblock": true, 00:15:41.624 "num_base_bdevs": 4, 00:15:41.624 "num_base_bdevs_discovered": 3, 00:15:41.624 "num_base_bdevs_operational": 4, 00:15:41.624 "base_bdevs_list": [ 00:15:41.624 { 00:15:41.624 "name": "BaseBdev1", 00:15:41.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.624 "is_configured": false, 00:15:41.624 "data_offset": 0, 00:15:41.624 "data_size": 0 00:15:41.624 }, 00:15:41.624 { 00:15:41.624 "name": "BaseBdev2", 00:15:41.624 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:41.624 "is_configured": true, 00:15:41.624 "data_offset": 2048, 00:15:41.624 "data_size": 63488 00:15:41.624 }, 00:15:41.624 { 00:15:41.624 "name": "BaseBdev3", 
00:15:41.624 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:41.624 "is_configured": true, 00:15:41.624 "data_offset": 2048, 00:15:41.624 "data_size": 63488 00:15:41.624 }, 00:15:41.624 { 00:15:41.624 "name": "BaseBdev4", 00:15:41.624 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:41.624 "is_configured": true, 00:15:41.624 "data_offset": 2048, 00:15:41.624 "data_size": 63488 00:15:41.624 } 00:15:41.624 ] 00:15:41.624 }' 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.624 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.884 [2024-11-19 10:26:55.576443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:41.884 
10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.884 "name": "Existed_Raid", 00:15:41.884 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:41.884 "strip_size_kb": 64, 00:15:41.884 "state": "configuring", 00:15:41.884 "raid_level": "raid5f", 00:15:41.884 "superblock": true, 00:15:41.884 "num_base_bdevs": 4, 00:15:41.884 "num_base_bdevs_discovered": 2, 00:15:41.884 "num_base_bdevs_operational": 4, 00:15:41.884 "base_bdevs_list": [ 00:15:41.884 { 00:15:41.884 "name": "BaseBdev1", 00:15:41.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.884 "is_configured": false, 00:15:41.884 "data_offset": 0, 00:15:41.884 "data_size": 0 00:15:41.884 }, 00:15:41.884 { 00:15:41.884 "name": null, 00:15:41.884 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:41.884 "is_configured": false, 00:15:41.884 "data_offset": 0, 00:15:41.884 "data_size": 63488 00:15:41.884 }, 00:15:41.884 { 
00:15:41.884 "name": "BaseBdev3", 00:15:41.884 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:41.884 "is_configured": true, 00:15:41.884 "data_offset": 2048, 00:15:41.884 "data_size": 63488 00:15:41.884 }, 00:15:41.884 { 00:15:41.884 "name": "BaseBdev4", 00:15:41.884 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:41.884 "is_configured": true, 00:15:41.884 "data_offset": 2048, 00:15:41.884 "data_size": 63488 00:15:41.884 } 00:15:41.884 ] 00:15:41.884 }' 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.884 10:26:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.455 [2024-11-19 10:26:56.117746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:42.455 BaseBdev1 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.455 [ 00:15:42.455 { 00:15:42.455 "name": "BaseBdev1", 00:15:42.455 "aliases": [ 00:15:42.455 "6212958d-64ec-4283-aefc-7eb372928c22" 00:15:42.455 ], 00:15:42.455 "product_name": "Malloc disk", 00:15:42.455 "block_size": 512, 00:15:42.455 "num_blocks": 65536, 00:15:42.455 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:42.455 "assigned_rate_limits": { 00:15:42.455 "rw_ios_per_sec": 0, 00:15:42.455 "rw_mbytes_per_sec": 0, 00:15:42.455 
"r_mbytes_per_sec": 0, 00:15:42.455 "w_mbytes_per_sec": 0 00:15:42.455 }, 00:15:42.455 "claimed": true, 00:15:42.455 "claim_type": "exclusive_write", 00:15:42.455 "zoned": false, 00:15:42.455 "supported_io_types": { 00:15:42.455 "read": true, 00:15:42.455 "write": true, 00:15:42.455 "unmap": true, 00:15:42.455 "flush": true, 00:15:42.455 "reset": true, 00:15:42.455 "nvme_admin": false, 00:15:42.455 "nvme_io": false, 00:15:42.455 "nvme_io_md": false, 00:15:42.455 "write_zeroes": true, 00:15:42.455 "zcopy": true, 00:15:42.455 "get_zone_info": false, 00:15:42.455 "zone_management": false, 00:15:42.455 "zone_append": false, 00:15:42.455 "compare": false, 00:15:42.455 "compare_and_write": false, 00:15:42.455 "abort": true, 00:15:42.455 "seek_hole": false, 00:15:42.455 "seek_data": false, 00:15:42.455 "copy": true, 00:15:42.455 "nvme_iov_md": false 00:15:42.455 }, 00:15:42.455 "memory_domains": [ 00:15:42.455 { 00:15:42.455 "dma_device_id": "system", 00:15:42.455 "dma_device_type": 1 00:15:42.455 }, 00:15:42.455 { 00:15:42.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:42.455 "dma_device_type": 2 00:15:42.455 } 00:15:42.455 ], 00:15:42.455 "driver_specific": {} 00:15:42.455 } 00:15:42.455 ] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.455 10:26:56 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.455 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.455 "name": "Existed_Raid", 00:15:42.455 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:42.455 "strip_size_kb": 64, 00:15:42.455 "state": "configuring", 00:15:42.455 "raid_level": "raid5f", 00:15:42.455 "superblock": true, 00:15:42.455 "num_base_bdevs": 4, 00:15:42.455 "num_base_bdevs_discovered": 3, 00:15:42.455 "num_base_bdevs_operational": 4, 00:15:42.455 "base_bdevs_list": [ 00:15:42.455 { 00:15:42.455 "name": "BaseBdev1", 00:15:42.455 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:42.455 "is_configured": true, 00:15:42.455 "data_offset": 2048, 00:15:42.455 "data_size": 63488 00:15:42.455 
}, 00:15:42.455 { 00:15:42.455 "name": null, 00:15:42.455 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:42.455 "is_configured": false, 00:15:42.455 "data_offset": 0, 00:15:42.455 "data_size": 63488 00:15:42.455 }, 00:15:42.455 { 00:15:42.455 "name": "BaseBdev3", 00:15:42.455 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:42.456 "is_configured": true, 00:15:42.456 "data_offset": 2048, 00:15:42.456 "data_size": 63488 00:15:42.456 }, 00:15:42.456 { 00:15:42.456 "name": "BaseBdev4", 00:15:42.456 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:42.456 "is_configured": true, 00:15:42.456 "data_offset": 2048, 00:15:42.456 "data_size": 63488 00:15:42.456 } 00:15:42.456 ] 00:15:42.456 }' 00:15:42.456 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.456 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:43.025 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.026 
[2024-11-19 10:26:56.668858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.026 "name": "Existed_Raid", 00:15:43.026 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:43.026 "strip_size_kb": 64, 00:15:43.026 "state": "configuring", 00:15:43.026 "raid_level": "raid5f", 00:15:43.026 "superblock": true, 00:15:43.026 "num_base_bdevs": 4, 00:15:43.026 "num_base_bdevs_discovered": 2, 00:15:43.026 "num_base_bdevs_operational": 4, 00:15:43.026 "base_bdevs_list": [ 00:15:43.026 { 00:15:43.026 "name": "BaseBdev1", 00:15:43.026 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:43.026 "is_configured": true, 00:15:43.026 "data_offset": 2048, 00:15:43.026 "data_size": 63488 00:15:43.026 }, 00:15:43.026 { 00:15:43.026 "name": null, 00:15:43.026 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:43.026 "is_configured": false, 00:15:43.026 "data_offset": 0, 00:15:43.026 "data_size": 63488 00:15:43.026 }, 00:15:43.026 { 00:15:43.026 "name": null, 00:15:43.026 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:43.026 "is_configured": false, 00:15:43.026 "data_offset": 0, 00:15:43.026 "data_size": 63488 00:15:43.026 }, 00:15:43.026 { 00:15:43.026 "name": "BaseBdev4", 00:15:43.026 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:43.026 "is_configured": true, 00:15:43.026 "data_offset": 2048, 00:15:43.026 "data_size": 63488 00:15:43.026 } 00:15:43.026 ] 00:15:43.026 }' 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.026 10:26:56 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.595 [2024-11-19 10:26:57.144050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.595 10:26:57 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.595 "name": "Existed_Raid", 00:15:43.595 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:43.595 "strip_size_kb": 64, 00:15:43.595 "state": "configuring", 00:15:43.595 "raid_level": "raid5f", 00:15:43.595 "superblock": true, 00:15:43.595 "num_base_bdevs": 4, 00:15:43.595 "num_base_bdevs_discovered": 3, 00:15:43.595 "num_base_bdevs_operational": 4, 00:15:43.595 "base_bdevs_list": [ 00:15:43.595 { 00:15:43.595 "name": "BaseBdev1", 00:15:43.595 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:43.595 "is_configured": true, 00:15:43.595 "data_offset": 2048, 00:15:43.595 "data_size": 63488 00:15:43.595 }, 00:15:43.595 { 00:15:43.595 "name": null, 00:15:43.595 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:43.595 "is_configured": false, 00:15:43.595 "data_offset": 0, 00:15:43.595 "data_size": 63488 00:15:43.595 }, 00:15:43.595 { 00:15:43.595 "name": "BaseBdev3", 00:15:43.595 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:43.595 "is_configured": true, 00:15:43.595 "data_offset": 2048, 00:15:43.595 "data_size": 63488 00:15:43.595 }, 00:15:43.595 { 
00:15:43.595 "name": "BaseBdev4", 00:15:43.595 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:43.595 "is_configured": true, 00:15:43.595 "data_offset": 2048, 00:15:43.595 "data_size": 63488 00:15:43.595 } 00:15:43.595 ] 00:15:43.595 }' 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.595 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.855 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.855 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.855 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.856 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:43.856 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.856 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:43.856 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:43.856 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.856 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.856 [2024-11-19 10:26:57.603297] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.116 "name": "Existed_Raid", 00:15:44.116 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:44.116 "strip_size_kb": 64, 00:15:44.116 "state": "configuring", 00:15:44.116 "raid_level": "raid5f", 00:15:44.116 "superblock": true, 00:15:44.116 "num_base_bdevs": 4, 00:15:44.116 "num_base_bdevs_discovered": 2, 00:15:44.116 
"num_base_bdevs_operational": 4, 00:15:44.116 "base_bdevs_list": [ 00:15:44.116 { 00:15:44.116 "name": null, 00:15:44.116 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:44.116 "is_configured": false, 00:15:44.116 "data_offset": 0, 00:15:44.116 "data_size": 63488 00:15:44.116 }, 00:15:44.116 { 00:15:44.116 "name": null, 00:15:44.116 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:44.116 "is_configured": false, 00:15:44.116 "data_offset": 0, 00:15:44.116 "data_size": 63488 00:15:44.116 }, 00:15:44.116 { 00:15:44.116 "name": "BaseBdev3", 00:15:44.116 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:44.116 "is_configured": true, 00:15:44.116 "data_offset": 2048, 00:15:44.116 "data_size": 63488 00:15:44.116 }, 00:15:44.116 { 00:15:44.116 "name": "BaseBdev4", 00:15:44.116 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:44.116 "is_configured": true, 00:15:44.116 "data_offset": 2048, 00:15:44.116 "data_size": 63488 00:15:44.116 } 00:15:44.116 ] 00:15:44.116 }' 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.116 10:26:57 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.376 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.376 [2024-11-19 10:26:58.151459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.636 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.637 "name": "Existed_Raid", 00:15:44.637 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:44.637 "strip_size_kb": 64, 00:15:44.637 "state": "configuring", 00:15:44.637 "raid_level": "raid5f", 00:15:44.637 "superblock": true, 00:15:44.637 "num_base_bdevs": 4, 00:15:44.637 "num_base_bdevs_discovered": 3, 00:15:44.637 "num_base_bdevs_operational": 4, 00:15:44.637 "base_bdevs_list": [ 00:15:44.637 { 00:15:44.637 "name": null, 00:15:44.637 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:44.637 "is_configured": false, 00:15:44.637 "data_offset": 0, 00:15:44.637 "data_size": 63488 00:15:44.637 }, 00:15:44.637 { 00:15:44.637 "name": "BaseBdev2", 00:15:44.637 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:44.637 "is_configured": true, 00:15:44.637 "data_offset": 2048, 00:15:44.637 "data_size": 63488 00:15:44.637 }, 00:15:44.637 { 00:15:44.637 "name": "BaseBdev3", 00:15:44.637 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:44.637 "is_configured": true, 00:15:44.637 "data_offset": 2048, 00:15:44.637 "data_size": 63488 00:15:44.637 }, 00:15:44.637 { 00:15:44.637 "name": "BaseBdev4", 00:15:44.637 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:44.637 "is_configured": true, 00:15:44.637 "data_offset": 2048, 00:15:44.637 "data_size": 63488 00:15:44.637 } 00:15:44.637 ] 00:15:44.637 }' 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.637 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6212958d-64ec-4283-aefc-7eb372928c22 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.897 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.158 [2024-11-19 10:26:58.701792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:45.158 [2024-11-19 10:26:58.702126] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:45.158 [2024-11-19 
10:26:58.702181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:45.158 [2024-11-19 10:26:58.702440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:45.158 NewBaseBdev 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.158 [2024-11-19 10:26:58.709477] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:45.158 [2024-11-19 10:26:58.709565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:45.158 [2024-11-19 10:26:58.709766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.158 [ 00:15:45.158 { 00:15:45.158 "name": "NewBaseBdev", 00:15:45.158 "aliases": [ 00:15:45.158 "6212958d-64ec-4283-aefc-7eb372928c22" 00:15:45.158 ], 00:15:45.158 "product_name": "Malloc disk", 00:15:45.158 "block_size": 512, 00:15:45.158 "num_blocks": 65536, 00:15:45.158 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:45.158 "assigned_rate_limits": { 00:15:45.158 "rw_ios_per_sec": 0, 00:15:45.158 "rw_mbytes_per_sec": 0, 00:15:45.158 "r_mbytes_per_sec": 0, 00:15:45.158 "w_mbytes_per_sec": 0 00:15:45.158 }, 00:15:45.158 "claimed": true, 00:15:45.158 "claim_type": "exclusive_write", 00:15:45.158 "zoned": false, 00:15:45.158 "supported_io_types": { 00:15:45.158 "read": true, 00:15:45.158 "write": true, 00:15:45.158 "unmap": true, 00:15:45.158 "flush": true, 00:15:45.158 "reset": true, 00:15:45.158 "nvme_admin": false, 00:15:45.158 "nvme_io": false, 00:15:45.158 "nvme_io_md": false, 00:15:45.158 "write_zeroes": true, 00:15:45.158 "zcopy": true, 00:15:45.158 "get_zone_info": false, 00:15:45.158 "zone_management": false, 00:15:45.158 "zone_append": false, 00:15:45.158 "compare": false, 00:15:45.158 "compare_and_write": false, 00:15:45.158 "abort": true, 00:15:45.158 "seek_hole": false, 00:15:45.158 "seek_data": false, 00:15:45.158 "copy": true, 00:15:45.158 "nvme_iov_md": false 00:15:45.158 }, 00:15:45.158 "memory_domains": [ 00:15:45.158 { 00:15:45.158 "dma_device_id": "system", 00:15:45.158 "dma_device_type": 1 00:15:45.158 }, 00:15:45.158 { 00:15:45.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.158 "dma_device_type": 2 00:15:45.158 } 00:15:45.158 ], 00:15:45.158 "driver_specific": {} 00:15:45.158 } 00:15:45.158 ] 00:15:45.158 10:26:58 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.158 "name": "Existed_Raid", 00:15:45.158 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:45.158 "strip_size_kb": 64, 00:15:45.158 "state": "online", 00:15:45.158 "raid_level": "raid5f", 00:15:45.158 "superblock": true, 00:15:45.158 "num_base_bdevs": 4, 00:15:45.158 "num_base_bdevs_discovered": 4, 00:15:45.158 "num_base_bdevs_operational": 4, 00:15:45.158 "base_bdevs_list": [ 00:15:45.158 { 00:15:45.158 "name": "NewBaseBdev", 00:15:45.158 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:45.158 "is_configured": true, 00:15:45.158 "data_offset": 2048, 00:15:45.158 "data_size": 63488 00:15:45.158 }, 00:15:45.158 { 00:15:45.158 "name": "BaseBdev2", 00:15:45.158 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:45.158 "is_configured": true, 00:15:45.158 "data_offset": 2048, 00:15:45.158 "data_size": 63488 00:15:45.158 }, 00:15:45.158 { 00:15:45.158 "name": "BaseBdev3", 00:15:45.158 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:45.158 "is_configured": true, 00:15:45.158 "data_offset": 2048, 00:15:45.158 "data_size": 63488 00:15:45.158 }, 00:15:45.158 { 00:15:45.158 "name": "BaseBdev4", 00:15:45.158 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:45.158 "is_configured": true, 00:15:45.158 "data_offset": 2048, 00:15:45.158 "data_size": 63488 00:15:45.158 } 00:15:45.158 ] 00:15:45.158 }' 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.158 10:26:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.418 [2024-11-19 10:26:59.165289] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.418 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:45.418 "name": "Existed_Raid", 00:15:45.418 "aliases": [ 00:15:45.418 "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad" 00:15:45.418 ], 00:15:45.418 "product_name": "Raid Volume", 00:15:45.418 "block_size": 512, 00:15:45.418 "num_blocks": 190464, 00:15:45.418 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:45.418 "assigned_rate_limits": { 00:15:45.418 "rw_ios_per_sec": 0, 00:15:45.418 "rw_mbytes_per_sec": 0, 00:15:45.418 "r_mbytes_per_sec": 0, 00:15:45.418 "w_mbytes_per_sec": 0 00:15:45.418 }, 00:15:45.418 "claimed": false, 00:15:45.418 "zoned": false, 00:15:45.418 "supported_io_types": { 00:15:45.418 "read": true, 00:15:45.418 "write": true, 00:15:45.418 "unmap": false, 00:15:45.418 "flush": false, 00:15:45.418 "reset": true, 00:15:45.418 "nvme_admin": false, 00:15:45.418 "nvme_io": false, 
00:15:45.418 "nvme_io_md": false, 00:15:45.418 "write_zeroes": true, 00:15:45.418 "zcopy": false, 00:15:45.418 "get_zone_info": false, 00:15:45.418 "zone_management": false, 00:15:45.418 "zone_append": false, 00:15:45.418 "compare": false, 00:15:45.418 "compare_and_write": false, 00:15:45.418 "abort": false, 00:15:45.418 "seek_hole": false, 00:15:45.418 "seek_data": false, 00:15:45.418 "copy": false, 00:15:45.419 "nvme_iov_md": false 00:15:45.419 }, 00:15:45.419 "driver_specific": { 00:15:45.419 "raid": { 00:15:45.419 "uuid": "4bd1ced1-e33b-4ae3-b8a4-a2f5922328ad", 00:15:45.419 "strip_size_kb": 64, 00:15:45.419 "state": "online", 00:15:45.419 "raid_level": "raid5f", 00:15:45.419 "superblock": true, 00:15:45.419 "num_base_bdevs": 4, 00:15:45.419 "num_base_bdevs_discovered": 4, 00:15:45.419 "num_base_bdevs_operational": 4, 00:15:45.419 "base_bdevs_list": [ 00:15:45.419 { 00:15:45.419 "name": "NewBaseBdev", 00:15:45.419 "uuid": "6212958d-64ec-4283-aefc-7eb372928c22", 00:15:45.419 "is_configured": true, 00:15:45.419 "data_offset": 2048, 00:15:45.419 "data_size": 63488 00:15:45.419 }, 00:15:45.419 { 00:15:45.419 "name": "BaseBdev2", 00:15:45.419 "uuid": "138d2e28-8583-434f-ad51-31d66f9c71c3", 00:15:45.419 "is_configured": true, 00:15:45.419 "data_offset": 2048, 00:15:45.419 "data_size": 63488 00:15:45.419 }, 00:15:45.419 { 00:15:45.419 "name": "BaseBdev3", 00:15:45.419 "uuid": "52b29925-c840-4257-8d87-ee4deb9138b6", 00:15:45.419 "is_configured": true, 00:15:45.419 "data_offset": 2048, 00:15:45.419 "data_size": 63488 00:15:45.419 }, 00:15:45.419 { 00:15:45.419 "name": "BaseBdev4", 00:15:45.419 "uuid": "e845ff3d-1126-497a-820a-8dfe504d859c", 00:15:45.419 "is_configured": true, 00:15:45.419 "data_offset": 2048, 00:15:45.419 "data_size": 63488 00:15:45.419 } 00:15:45.419 ] 00:15:45.419 } 00:15:45.419 } 00:15:45.419 }' 00:15:45.419 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:45.677 BaseBdev2 00:15:45.677 BaseBdev3 00:15:45.677 BaseBdev4' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.677 10:26:59 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.677 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.937 [2024-11-19 10:26:59.472623] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:45.937 [2024-11-19 10:26:59.472648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:45.937 [2024-11-19 10:26:59.472708] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:45.937 [2024-11-19 10:26:59.472981] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:45.937 [2024-11-19 10:26:59.473005] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83118 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83118 ']' 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83118 00:15:45.937 10:26:59 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83118 00:15:45.937 killing process with pid 83118 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83118' 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83118 00:15:45.937 [2024-11-19 10:26:59.520223] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.937 10:26:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83118 00:15:46.197 [2024-11-19 10:26:59.887015] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.138 10:27:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:47.138 00:15:47.138 real 0m11.359s 00:15:47.138 user 0m18.129s 00:15:47.138 sys 0m2.162s 00:15:47.138 10:27:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.138 ************************************ 00:15:47.138 END TEST raid5f_state_function_test_sb 00:15:47.138 ************************************ 00:15:47.138 10:27:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 10:27:00 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:47.397 10:27:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:47.397 
10:27:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.397 10:27:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.397 ************************************ 00:15:47.397 START TEST raid5f_superblock_test 00:15:47.397 ************************************ 00:15:47.397 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:47.397 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:47.397 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:47.397 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:47.397 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:47.397 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83786 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83786 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83786 ']' 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.398 10:27:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.398 [2024-11-19 10:27:01.071847] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:15:47.398 [2024-11-19 10:27:01.071949] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83786 ] 00:15:47.658 [2024-11-19 10:27:01.245686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.658 [2024-11-19 10:27:01.352088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.918 [2024-11-19 10:27:01.541652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.918 [2024-11-19 10:27:01.541679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.178 malloc1 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.178 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.178 [2024-11-19 10:27:01.953739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:48.178 [2024-11-19 10:27:01.953873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.178 [2024-11-19 10:27:01.953913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.178 [2024-11-19 10:27:01.953944] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.178 [2024-11-19 10:27:01.956036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.178 [2024-11-19 10:27:01.956106] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:48.439 pt1 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.439 malloc2 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.439 [2024-11-19 10:27:02.006302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:48.439 [2024-11-19 10:27:02.006420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.439 [2024-11-19 10:27:02.006455] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.439 [2024-11-19 10:27:02.006483] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.439 [2024-11-19 10:27:02.008550] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.439 [2024-11-19 10:27:02.008620] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:48.439 pt2 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.439 malloc3 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.439 [2024-11-19 10:27:02.088839] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:48.439 [2024-11-19 10:27:02.088950] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.439 [2024-11-19 10:27:02.088985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.439 [2024-11-19 10:27:02.089026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.439 [2024-11-19 10:27:02.091066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.439 [2024-11-19 10:27:02.091132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:48.439 pt3 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.439 malloc4 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.439 [2024-11-19 10:27:02.146619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:48.439 [2024-11-19 10:27:02.146707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.439 [2024-11-19 10:27:02.146744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:48.439 [2024-11-19 10:27:02.146772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.439 [2024-11-19 10:27:02.148726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.439 [2024-11-19 10:27:02.148796] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:48.439 pt4 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:48.439 [2024-11-19 10:27:02.158638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:48.439 [2024-11-19 10:27:02.160380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:48.439 [2024-11-19 10:27:02.160498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:48.439 [2024-11-19 10:27:02.160594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:48.439 [2024-11-19 10:27:02.160827] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.439 [2024-11-19 10:27:02.160878] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:48.439 [2024-11-19 10:27:02.161160] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:48.439 [2024-11-19 10:27:02.168376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.439 [2024-11-19 10:27:02.168433] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.439 [2024-11-19 10:27:02.168654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:48.439 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.439 
10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.440 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.700 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.700 "name": "raid_bdev1", 00:15:48.700 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:48.700 "strip_size_kb": 64, 00:15:48.700 "state": "online", 00:15:48.700 "raid_level": "raid5f", 00:15:48.700 "superblock": true, 00:15:48.700 "num_base_bdevs": 4, 00:15:48.700 "num_base_bdevs_discovered": 4, 00:15:48.700 "num_base_bdevs_operational": 4, 00:15:48.700 "base_bdevs_list": [ 00:15:48.700 { 00:15:48.700 "name": "pt1", 00:15:48.700 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.700 "is_configured": true, 00:15:48.700 "data_offset": 2048, 00:15:48.700 "data_size": 63488 00:15:48.700 }, 00:15:48.700 { 00:15:48.700 "name": "pt2", 00:15:48.700 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.700 "is_configured": true, 00:15:48.700 "data_offset": 2048, 00:15:48.700 
"data_size": 63488 00:15:48.700 }, 00:15:48.700 { 00:15:48.700 "name": "pt3", 00:15:48.700 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.700 "is_configured": true, 00:15:48.700 "data_offset": 2048, 00:15:48.700 "data_size": 63488 00:15:48.700 }, 00:15:48.700 { 00:15:48.700 "name": "pt4", 00:15:48.700 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.700 "is_configured": true, 00:15:48.700 "data_offset": 2048, 00:15:48.700 "data_size": 63488 00:15:48.700 } 00:15:48.700 ] 00:15:48.700 }' 00:15:48.700 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.700 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.961 [2024-11-19 10:27:02.600131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:48.961 "name": "raid_bdev1", 00:15:48.961 "aliases": [ 00:15:48.961 "9947a1e3-d36f-4edc-987f-56d93cc9d7a8" 00:15:48.961 ], 00:15:48.961 "product_name": "Raid Volume", 00:15:48.961 "block_size": 512, 00:15:48.961 "num_blocks": 190464, 00:15:48.961 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:48.961 "assigned_rate_limits": { 00:15:48.961 "rw_ios_per_sec": 0, 00:15:48.961 "rw_mbytes_per_sec": 0, 00:15:48.961 "r_mbytes_per_sec": 0, 00:15:48.961 "w_mbytes_per_sec": 0 00:15:48.961 }, 00:15:48.961 "claimed": false, 00:15:48.961 "zoned": false, 00:15:48.961 "supported_io_types": { 00:15:48.961 "read": true, 00:15:48.961 "write": true, 00:15:48.961 "unmap": false, 00:15:48.961 "flush": false, 00:15:48.961 "reset": true, 00:15:48.961 "nvme_admin": false, 00:15:48.961 "nvme_io": false, 00:15:48.961 "nvme_io_md": false, 00:15:48.961 "write_zeroes": true, 00:15:48.961 "zcopy": false, 00:15:48.961 "get_zone_info": false, 00:15:48.961 "zone_management": false, 00:15:48.961 "zone_append": false, 00:15:48.961 "compare": false, 00:15:48.961 "compare_and_write": false, 00:15:48.961 "abort": false, 00:15:48.961 "seek_hole": false, 00:15:48.961 "seek_data": false, 00:15:48.961 "copy": false, 00:15:48.961 "nvme_iov_md": false 00:15:48.961 }, 00:15:48.961 "driver_specific": { 00:15:48.961 "raid": { 00:15:48.961 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:48.961 "strip_size_kb": 64, 00:15:48.961 "state": "online", 00:15:48.961 "raid_level": "raid5f", 00:15:48.961 "superblock": true, 00:15:48.961 "num_base_bdevs": 4, 00:15:48.961 "num_base_bdevs_discovered": 4, 00:15:48.961 "num_base_bdevs_operational": 4, 00:15:48.961 "base_bdevs_list": [ 00:15:48.961 { 00:15:48.961 "name": "pt1", 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:48.961 "is_configured": true, 00:15:48.961 "data_offset": 2048, 
00:15:48.961 "data_size": 63488 00:15:48.961 }, 00:15:48.961 { 00:15:48.961 "name": "pt2", 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:48.961 "is_configured": true, 00:15:48.961 "data_offset": 2048, 00:15:48.961 "data_size": 63488 00:15:48.961 }, 00:15:48.961 { 00:15:48.961 "name": "pt3", 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:48.961 "is_configured": true, 00:15:48.961 "data_offset": 2048, 00:15:48.961 "data_size": 63488 00:15:48.961 }, 00:15:48.961 { 00:15:48.961 "name": "pt4", 00:15:48.961 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:48.961 "is_configured": true, 00:15:48.961 "data_offset": 2048, 00:15:48.961 "data_size": 63488 00:15:48.961 } 00:15:48.961 ] 00:15:48.961 } 00:15:48.961 } 00:15:48.961 }' 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:48.961 pt2 00:15:48.961 pt3 00:15:48.961 pt4' 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.961 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:48.961 10:27:02 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.222 [2024-11-19 10:27:02.903570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9947a1e3-d36f-4edc-987f-56d93cc9d7a8 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
9947a1e3-d36f-4edc-987f-56d93cc9d7a8 ']' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.222 [2024-11-19 10:27:02.943412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.222 [2024-11-19 10:27:02.943467] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:49.222 [2024-11-19 10:27:02.943568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:49.222 [2024-11-19 10:27:02.943655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:49.222 [2024-11-19 10:27:02.943707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.222 10:27:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:49.482 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:49.482 10:27:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.482 
10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 10:27:03 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:49.482 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.482 [2024-11-19 10:27:03.111139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:49.482 [2024-11-19 10:27:03.112890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:49.482 [2024-11-19 10:27:03.112990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:49.482 [2024-11-19 10:27:03.113060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:49.482 [2024-11-19 10:27:03.113136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:49.482 [2024-11-19 10:27:03.113210] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:49.482 [2024-11-19 10:27:03.113294] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:49.482 [2024-11-19 10:27:03.113343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:49.482 [2024-11-19 10:27:03.113408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:49.482 [2024-11-19 10:27:03.113430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:49.482 request: 00:15:49.482 { 00:15:49.482 "name": "raid_bdev1", 00:15:49.482 "raid_level": "raid5f", 00:15:49.482 "base_bdevs": [ 00:15:49.482 "malloc1", 00:15:49.482 "malloc2", 00:15:49.482 "malloc3", 00:15:49.482 "malloc4" 00:15:49.482 ], 00:15:49.482 "strip_size_kb": 64, 00:15:49.482 "superblock": false, 00:15:49.482 "method": "bdev_raid_create", 00:15:49.482 "req_id": 1 00:15:49.482 } 00:15:49.482 Got JSON-RPC error response 
00:15:49.482 response: 00:15:49.482 { 00:15:49.482 "code": -17, 00:15:49.483 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:49.483 } 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.483 [2024-11-19 10:27:03.167032] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:49.483 [2024-11-19 10:27:03.167110] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:49.483 [2024-11-19 10:27:03.167140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:49.483 [2024-11-19 10:27:03.167168] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.483 [2024-11-19 10:27:03.169158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.483 [2024-11-19 10:27:03.169240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:49.483 [2024-11-19 10:27:03.169321] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:49.483 [2024-11-19 10:27:03.169417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:49.483 pt1 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:49.483 "name": "raid_bdev1", 00:15:49.483 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:49.483 "strip_size_kb": 64, 00:15:49.483 "state": "configuring", 00:15:49.483 "raid_level": "raid5f", 00:15:49.483 "superblock": true, 00:15:49.483 "num_base_bdevs": 4, 00:15:49.483 "num_base_bdevs_discovered": 1, 00:15:49.483 "num_base_bdevs_operational": 4, 00:15:49.483 "base_bdevs_list": [ 00:15:49.483 { 00:15:49.483 "name": "pt1", 00:15:49.483 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:49.483 "is_configured": true, 00:15:49.483 "data_offset": 2048, 00:15:49.483 "data_size": 63488 00:15:49.483 }, 00:15:49.483 { 00:15:49.483 "name": null, 00:15:49.483 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:49.483 "is_configured": false, 00:15:49.483 "data_offset": 2048, 00:15:49.483 "data_size": 63488 00:15:49.483 }, 00:15:49.483 { 00:15:49.483 "name": null, 00:15:49.483 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:49.483 "is_configured": false, 00:15:49.483 "data_offset": 2048, 00:15:49.483 "data_size": 63488 00:15:49.483 }, 00:15:49.483 { 00:15:49.483 "name": null, 00:15:49.483 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:49.483 "is_configured": false, 00:15:49.483 "data_offset": 2048, 00:15:49.483 "data_size": 63488 00:15:49.483 } 00:15:49.483 ] 00:15:49.483 }' 
00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:49.483 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 [2024-11-19 10:27:03.622266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.054 [2024-11-19 10:27:03.622322] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.054 [2024-11-19 10:27:03.622337] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:50.054 [2024-11-19 10:27:03.622347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.054 [2024-11-19 10:27:03.622708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.054 [2024-11-19 10:27:03.622728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.054 [2024-11-19 10:27:03.622791] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.054 [2024-11-19 10:27:03.622811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.054 pt2 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 [2024-11-19 10:27:03.634251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.054 "name": "raid_bdev1", 00:15:50.054 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:50.054 "strip_size_kb": 64, 00:15:50.054 "state": "configuring", 00:15:50.054 "raid_level": "raid5f", 00:15:50.054 "superblock": true, 00:15:50.054 "num_base_bdevs": 4, 00:15:50.054 "num_base_bdevs_discovered": 1, 00:15:50.054 "num_base_bdevs_operational": 4, 00:15:50.054 "base_bdevs_list": [ 00:15:50.054 { 00:15:50.054 "name": "pt1", 00:15:50.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.054 "is_configured": true, 00:15:50.054 "data_offset": 2048, 00:15:50.054 "data_size": 63488 00:15:50.054 }, 00:15:50.054 { 00:15:50.054 "name": null, 00:15:50.054 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.054 "is_configured": false, 00:15:50.054 "data_offset": 0, 00:15:50.054 "data_size": 63488 00:15:50.054 }, 00:15:50.054 { 00:15:50.054 "name": null, 00:15:50.054 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.054 "is_configured": false, 00:15:50.054 "data_offset": 2048, 00:15:50.054 "data_size": 63488 00:15:50.054 }, 00:15:50.054 { 00:15:50.054 "name": null, 00:15:50.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.054 "is_configured": false, 00:15:50.054 "data_offset": 2048, 00:15:50.054 "data_size": 63488 00:15:50.054 } 00:15:50.054 ] 00:15:50.054 }' 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.054 10:27:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.315 [2024-11-19 10:27:04.057507] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:50.315 [2024-11-19 10:27:04.057552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.315 [2024-11-19 10:27:04.057567] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:50.315 [2024-11-19 10:27:04.057575] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.315 [2024-11-19 10:27:04.057920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.315 [2024-11-19 10:27:04.057936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:50.315 [2024-11-19 10:27:04.058008] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:50.315 [2024-11-19 10:27:04.058027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:50.315 pt2 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.315 [2024-11-19 10:27:04.069492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:50.315 [2024-11-19 10:27:04.069575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.315 [2024-11-19 10:27:04.069606] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:50.315 [2024-11-19 10:27:04.069632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.315 [2024-11-19 10:27:04.069978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.315 [2024-11-19 10:27:04.070047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:50.315 [2024-11-19 10:27:04.070127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:50.315 [2024-11-19 10:27:04.070171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:50.315 pt3 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.315 [2024-11-19 10:27:04.081448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:50.315 [2024-11-19 10:27:04.081541] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.315 [2024-11-19 10:27:04.081575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:50.315 [2024-11-19 10:27:04.081601] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.315 [2024-11-19 10:27:04.081966] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.315 [2024-11-19 10:27:04.082030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:50.315 [2024-11-19 10:27:04.082115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:50.315 [2024-11-19 10:27:04.082160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:50.315 [2024-11-19 10:27:04.082307] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:50.315 [2024-11-19 10:27:04.082343] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:50.315 [2024-11-19 10:27:04.082571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:50.315 [2024-11-19 10:27:04.089487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:50.315 [2024-11-19 10:27:04.089542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:50.315 [2024-11-19 10:27:04.089742] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.315 pt4 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.315 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.316 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.575 "name": "raid_bdev1", 00:15:50.575 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:50.575 "strip_size_kb": 64, 00:15:50.575 "state": "online", 00:15:50.575 "raid_level": "raid5f", 00:15:50.575 "superblock": true, 00:15:50.575 "num_base_bdevs": 4, 00:15:50.575 "num_base_bdevs_discovered": 4, 00:15:50.575 "num_base_bdevs_operational": 4, 00:15:50.575 "base_bdevs_list": [ 00:15:50.575 { 00:15:50.575 "name": "pt1", 00:15:50.575 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.575 "is_configured": true, 00:15:50.575 
"data_offset": 2048, 00:15:50.575 "data_size": 63488 00:15:50.575 }, 00:15:50.575 { 00:15:50.575 "name": "pt2", 00:15:50.575 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.575 "is_configured": true, 00:15:50.575 "data_offset": 2048, 00:15:50.575 "data_size": 63488 00:15:50.575 }, 00:15:50.575 { 00:15:50.575 "name": "pt3", 00:15:50.575 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.575 "is_configured": true, 00:15:50.575 "data_offset": 2048, 00:15:50.575 "data_size": 63488 00:15:50.575 }, 00:15:50.575 { 00:15:50.575 "name": "pt4", 00:15:50.575 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.575 "is_configured": true, 00:15:50.575 "data_offset": 2048, 00:15:50.575 "data_size": 63488 00:15:50.575 } 00:15:50.575 ] 00:15:50.575 }' 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.575 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.836 10:27:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.836 [2024-11-19 10:27:04.521102] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:50.836 "name": "raid_bdev1", 00:15:50.836 "aliases": [ 00:15:50.836 "9947a1e3-d36f-4edc-987f-56d93cc9d7a8" 00:15:50.836 ], 00:15:50.836 "product_name": "Raid Volume", 00:15:50.836 "block_size": 512, 00:15:50.836 "num_blocks": 190464, 00:15:50.836 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:50.836 "assigned_rate_limits": { 00:15:50.836 "rw_ios_per_sec": 0, 00:15:50.836 "rw_mbytes_per_sec": 0, 00:15:50.836 "r_mbytes_per_sec": 0, 00:15:50.836 "w_mbytes_per_sec": 0 00:15:50.836 }, 00:15:50.836 "claimed": false, 00:15:50.836 "zoned": false, 00:15:50.836 "supported_io_types": { 00:15:50.836 "read": true, 00:15:50.836 "write": true, 00:15:50.836 "unmap": false, 00:15:50.836 "flush": false, 00:15:50.836 "reset": true, 00:15:50.836 "nvme_admin": false, 00:15:50.836 "nvme_io": false, 00:15:50.836 "nvme_io_md": false, 00:15:50.836 "write_zeroes": true, 00:15:50.836 "zcopy": false, 00:15:50.836 "get_zone_info": false, 00:15:50.836 "zone_management": false, 00:15:50.836 "zone_append": false, 00:15:50.836 "compare": false, 00:15:50.836 "compare_and_write": false, 00:15:50.836 "abort": false, 00:15:50.836 "seek_hole": false, 00:15:50.836 "seek_data": false, 00:15:50.836 "copy": false, 00:15:50.836 "nvme_iov_md": false 00:15:50.836 }, 00:15:50.836 "driver_specific": { 00:15:50.836 "raid": { 00:15:50.836 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:50.836 "strip_size_kb": 64, 00:15:50.836 "state": "online", 00:15:50.836 "raid_level": "raid5f", 00:15:50.836 "superblock": true, 00:15:50.836 "num_base_bdevs": 4, 00:15:50.836 "num_base_bdevs_discovered": 4, 
00:15:50.836 "num_base_bdevs_operational": 4, 00:15:50.836 "base_bdevs_list": [ 00:15:50.836 { 00:15:50.836 "name": "pt1", 00:15:50.836 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:50.836 "is_configured": true, 00:15:50.836 "data_offset": 2048, 00:15:50.836 "data_size": 63488 00:15:50.836 }, 00:15:50.836 { 00:15:50.836 "name": "pt2", 00:15:50.836 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:50.836 "is_configured": true, 00:15:50.836 "data_offset": 2048, 00:15:50.836 "data_size": 63488 00:15:50.836 }, 00:15:50.836 { 00:15:50.836 "name": "pt3", 00:15:50.836 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:50.836 "is_configured": true, 00:15:50.836 "data_offset": 2048, 00:15:50.836 "data_size": 63488 00:15:50.836 }, 00:15:50.836 { 00:15:50.836 "name": "pt4", 00:15:50.836 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:50.836 "is_configured": true, 00:15:50.836 "data_offset": 2048, 00:15:50.836 "data_size": 63488 00:15:50.836 } 00:15:50.836 ] 00:15:50.836 } 00:15:50.836 } 00:15:50.836 }' 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:50.836 pt2 00:15:50.836 pt3 00:15:50.836 pt4' 00:15:50.836 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.097 10:27:04 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.097 
10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:51.097 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:51.098 [2024-11-19 10:27:04.852473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:51.098 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9947a1e3-d36f-4edc-987f-56d93cc9d7a8 '!=' 9947a1e3-d36f-4edc-987f-56d93cc9d7a8 ']' 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.357 [2024-11-19 10:27:04.900280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.357 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.357 "name": "raid_bdev1", 00:15:51.357 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:51.357 "strip_size_kb": 64, 00:15:51.357 "state": "online", 00:15:51.357 "raid_level": "raid5f", 00:15:51.357 "superblock": true, 00:15:51.357 "num_base_bdevs": 4, 00:15:51.357 "num_base_bdevs_discovered": 3, 00:15:51.358 "num_base_bdevs_operational": 3, 00:15:51.358 "base_bdevs_list": [ 00:15:51.358 { 00:15:51.358 "name": null, 00:15:51.358 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.358 "is_configured": false, 00:15:51.358 "data_offset": 0, 00:15:51.358 "data_size": 63488 00:15:51.358 }, 00:15:51.358 { 00:15:51.358 "name": "pt2", 00:15:51.358 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.358 "is_configured": true, 00:15:51.358 "data_offset": 2048, 00:15:51.358 "data_size": 63488 00:15:51.358 }, 00:15:51.358 { 00:15:51.358 "name": "pt3", 00:15:51.358 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.358 "is_configured": true, 00:15:51.358 "data_offset": 2048, 00:15:51.358 "data_size": 63488 00:15:51.358 }, 00:15:51.358 { 00:15:51.358 "name": "pt4", 00:15:51.358 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.358 "is_configured": true, 00:15:51.358 
"data_offset": 2048, 00:15:51.358 "data_size": 63488 00:15:51.358 } 00:15:51.358 ] 00:15:51.358 }' 00:15:51.358 10:27:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.358 10:27:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.619 [2024-11-19 10:27:05.351446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:51.619 [2024-11-19 10:27:05.351509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:51.619 [2024-11-19 10:27:05.351580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:51.619 [2024-11-19 10:27:05.351666] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:51.619 [2024-11-19 10:27:05.351731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.619 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.879 [2024-11-19 10:27:05.435428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:51.879 [2024-11-19 10:27:05.435508] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.879 [2024-11-19 10:27:05.435541] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:51.879 [2024-11-19 10:27:05.435566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.879 [2024-11-19 10:27:05.437620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.879 [2024-11-19 10:27:05.437700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:51.879 [2024-11-19 10:27:05.437785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:51.879 [2024-11-19 10:27:05.437860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:51.879 pt2 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.879 10:27:05 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.880 "name": "raid_bdev1", 00:15:51.880 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:51.880 "strip_size_kb": 64, 00:15:51.880 "state": "configuring", 00:15:51.880 "raid_level": "raid5f", 00:15:51.880 "superblock": true, 00:15:51.880 
"num_base_bdevs": 4, 00:15:51.880 "num_base_bdevs_discovered": 1, 00:15:51.880 "num_base_bdevs_operational": 3, 00:15:51.880 "base_bdevs_list": [ 00:15:51.880 { 00:15:51.880 "name": null, 00:15:51.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.880 "is_configured": false, 00:15:51.880 "data_offset": 2048, 00:15:51.880 "data_size": 63488 00:15:51.880 }, 00:15:51.880 { 00:15:51.880 "name": "pt2", 00:15:51.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:51.880 "is_configured": true, 00:15:51.880 "data_offset": 2048, 00:15:51.880 "data_size": 63488 00:15:51.880 }, 00:15:51.880 { 00:15:51.880 "name": null, 00:15:51.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:51.880 "is_configured": false, 00:15:51.880 "data_offset": 2048, 00:15:51.880 "data_size": 63488 00:15:51.880 }, 00:15:51.880 { 00:15:51.880 "name": null, 00:15:51.880 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:51.880 "is_configured": false, 00:15:51.880 "data_offset": 2048, 00:15:51.880 "data_size": 63488 00:15:51.880 } 00:15:51.880 ] 00:15:51.880 }' 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.880 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.140 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:52.140 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.141 [2024-11-19 10:27:05.890783] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:52.141 [2024-11-19 
10:27:05.890888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.141 [2024-11-19 10:27:05.890923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:52.141 [2024-11-19 10:27:05.890949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.141 [2024-11-19 10:27:05.891365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.141 [2024-11-19 10:27:05.891424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:52.141 [2024-11-19 10:27:05.891522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:52.141 [2024-11-19 10:27:05.891579] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:52.141 pt3 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.141 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.401 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.401 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.401 "name": "raid_bdev1", 00:15:52.401 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:52.401 "strip_size_kb": 64, 00:15:52.401 "state": "configuring", 00:15:52.401 "raid_level": "raid5f", 00:15:52.401 "superblock": true, 00:15:52.401 "num_base_bdevs": 4, 00:15:52.401 "num_base_bdevs_discovered": 2, 00:15:52.401 "num_base_bdevs_operational": 3, 00:15:52.401 "base_bdevs_list": [ 00:15:52.401 { 00:15:52.401 "name": null, 00:15:52.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.401 "is_configured": false, 00:15:52.401 "data_offset": 2048, 00:15:52.401 "data_size": 63488 00:15:52.401 }, 00:15:52.401 { 00:15:52.401 "name": "pt2", 00:15:52.401 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.401 "is_configured": true, 00:15:52.401 "data_offset": 2048, 00:15:52.401 "data_size": 63488 00:15:52.401 }, 00:15:52.401 { 00:15:52.401 "name": "pt3", 00:15:52.401 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.401 "is_configured": true, 00:15:52.401 "data_offset": 2048, 00:15:52.401 "data_size": 63488 00:15:52.401 }, 00:15:52.401 { 00:15:52.401 "name": null, 00:15:52.401 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:52.401 "is_configured": false, 00:15:52.401 "data_offset": 2048, 
00:15:52.402 "data_size": 63488 00:15:52.402 } 00:15:52.402 ] 00:15:52.402 }' 00:15:52.402 10:27:05 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.402 10:27:05 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.662 [2024-11-19 10:27:06.377946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:52.662 [2024-11-19 10:27:06.378053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:52.662 [2024-11-19 10:27:06.378088] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:52.662 [2024-11-19 10:27:06.378116] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:52.662 [2024-11-19 10:27:06.378495] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:52.662 [2024-11-19 10:27:06.378548] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:52.662 [2024-11-19 10:27:06.378634] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:52.662 [2024-11-19 10:27:06.378679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:52.662 [2024-11-19 10:27:06.378812] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:52.662 [2024-11-19 10:27:06.378847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:52.662 [2024-11-19 10:27:06.379094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:52.662 [2024-11-19 10:27:06.385480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:52.662 [2024-11-19 10:27:06.385539] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:52.662 [2024-11-19 10:27:06.385834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.662 pt4 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.662 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.663 
10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.663 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.663 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.663 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.663 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.663 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.663 "name": "raid_bdev1", 00:15:52.663 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:52.663 "strip_size_kb": 64, 00:15:52.663 "state": "online", 00:15:52.663 "raid_level": "raid5f", 00:15:52.663 "superblock": true, 00:15:52.663 "num_base_bdevs": 4, 00:15:52.663 "num_base_bdevs_discovered": 3, 00:15:52.663 "num_base_bdevs_operational": 3, 00:15:52.663 "base_bdevs_list": [ 00:15:52.663 { 00:15:52.663 "name": null, 00:15:52.663 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.663 "is_configured": false, 00:15:52.663 "data_offset": 2048, 00:15:52.663 "data_size": 63488 00:15:52.663 }, 00:15:52.663 { 00:15:52.663 "name": "pt2", 00:15:52.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:52.663 "is_configured": true, 00:15:52.663 "data_offset": 2048, 00:15:52.663 "data_size": 63488 00:15:52.663 }, 00:15:52.663 { 00:15:52.663 "name": "pt3", 00:15:52.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:52.663 "is_configured": true, 00:15:52.663 "data_offset": 2048, 00:15:52.663 "data_size": 63488 00:15:52.663 }, 00:15:52.663 { 00:15:52.663 "name": "pt4", 00:15:52.663 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:52.663 "is_configured": true, 00:15:52.663 "data_offset": 2048, 00:15:52.663 "data_size": 63488 00:15:52.663 } 00:15:52.663 ] 00:15:52.663 }' 00:15:52.663 10:27:06 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.663 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 [2024-11-19 10:27:06.805614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.233 [2024-11-19 10:27:06.805680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:53.233 [2024-11-19 10:27:06.805768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:53.233 [2024-11-19 10:27:06.805851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:53.233 [2024-11-19 10:27:06.805903] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.233 [2024-11-19 10:27:06.881473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:53.233 [2024-11-19 10:27:06.881578] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.233 [2024-11-19 10:27:06.881619] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:53.233 [2024-11-19 10:27:06.881652] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.233 [2024-11-19 10:27:06.883826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.233 [2024-11-19 10:27:06.883902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:53.233 [2024-11-19 10:27:06.884020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:53.233 [2024-11-19 10:27:06.884102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:53.233 
[2024-11-19 10:27:06.884278] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:53.233 [2024-11-19 10:27:06.884333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:53.233 [2024-11-19 10:27:06.884368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:53.233 [2024-11-19 10:27:06.884506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:53.233 [2024-11-19 10:27:06.884643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:53.233 pt1 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.233 10:27:06 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.234 "name": "raid_bdev1", 00:15:53.234 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:53.234 "strip_size_kb": 64, 00:15:53.234 "state": "configuring", 00:15:53.234 "raid_level": "raid5f", 00:15:53.234 "superblock": true, 00:15:53.234 "num_base_bdevs": 4, 00:15:53.234 "num_base_bdevs_discovered": 2, 00:15:53.234 "num_base_bdevs_operational": 3, 00:15:53.234 "base_bdevs_list": [ 00:15:53.234 { 00:15:53.234 "name": null, 00:15:53.234 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.234 "is_configured": false, 00:15:53.234 "data_offset": 2048, 00:15:53.234 "data_size": 63488 00:15:53.234 }, 00:15:53.234 { 00:15:53.234 "name": "pt2", 00:15:53.234 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.234 "is_configured": true, 00:15:53.234 "data_offset": 2048, 00:15:53.234 "data_size": 63488 00:15:53.234 }, 00:15:53.234 { 00:15:53.234 "name": "pt3", 00:15:53.234 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.234 "is_configured": true, 00:15:53.234 "data_offset": 2048, 00:15:53.234 "data_size": 63488 00:15:53.234 }, 00:15:53.234 { 00:15:53.234 "name": null, 00:15:53.234 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:53.234 "is_configured": false, 00:15:53.234 "data_offset": 2048, 00:15:53.234 "data_size": 63488 00:15:53.234 } 00:15:53.234 ] 
00:15:53.234 }' 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.234 10:27:06 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 [2024-11-19 10:27:07.388708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:53.832 [2024-11-19 10:27:07.388792] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.832 [2024-11-19 10:27:07.388829] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:53.832 [2024-11-19 10:27:07.388857] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.832 [2024-11-19 10:27:07.389263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.832 [2024-11-19 10:27:07.389326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:53.832 [2024-11-19 10:27:07.389426] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:53.832 [2024-11-19 10:27:07.389485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:53.832 [2024-11-19 10:27:07.389638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:53.832 [2024-11-19 10:27:07.389676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:53.832 [2024-11-19 10:27:07.389927] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:53.832 [2024-11-19 10:27:07.396561] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:53.832 [2024-11-19 10:27:07.396621] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:53.832 [2024-11-19 10:27:07.396915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:53.832 pt4 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.832 10:27:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.832 "name": "raid_bdev1", 00:15:53.832 "uuid": "9947a1e3-d36f-4edc-987f-56d93cc9d7a8", 00:15:53.832 "strip_size_kb": 64, 00:15:53.832 "state": "online", 00:15:53.832 "raid_level": "raid5f", 00:15:53.832 "superblock": true, 00:15:53.832 "num_base_bdevs": 4, 00:15:53.832 "num_base_bdevs_discovered": 3, 00:15:53.832 "num_base_bdevs_operational": 3, 00:15:53.832 "base_bdevs_list": [ 00:15:53.832 { 00:15:53.832 "name": null, 00:15:53.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.832 "is_configured": false, 00:15:53.832 "data_offset": 2048, 00:15:53.832 "data_size": 63488 00:15:53.832 }, 00:15:53.832 { 00:15:53.832 "name": "pt2", 00:15:53.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:53.832 "is_configured": true, 00:15:53.832 "data_offset": 2048, 00:15:53.832 "data_size": 63488 00:15:53.832 }, 00:15:53.832 { 00:15:53.832 "name": "pt3", 00:15:53.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:53.832 "is_configured": true, 00:15:53.832 "data_offset": 2048, 00:15:53.832 "data_size": 63488 
00:15:53.832 }, 00:15:53.832 { 00:15:53.832 "name": "pt4", 00:15:53.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:53.832 "is_configured": true, 00:15:53.832 "data_offset": 2048, 00:15:53.832 "data_size": 63488 00:15:53.832 } 00:15:53.832 ] 00:15:53.832 }' 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.832 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:54.092 [2024-11-19 10:27:07.856905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:54.092 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9947a1e3-d36f-4edc-987f-56d93cc9d7a8 '!=' 9947a1e3-d36f-4edc-987f-56d93cc9d7a8 ']' 00:15:54.352 10:27:07 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83786 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83786 ']' 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83786 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83786 00:15:54.352 killing process with pid 83786 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83786' 00:15:54.352 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83786 00:15:54.353 [2024-11-19 10:27:07.945221] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:54.353 [2024-11-19 10:27:07.945301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.353 [2024-11-19 10:27:07.945371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:54.353 [2024-11-19 10:27:07.945383] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:54.353 10:27:07 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83786 00:15:54.612 [2024-11-19 10:27:08.313950] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:55.994 10:27:09 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:55.994 
00:15:55.994 real 0m8.353s 00:15:55.994 user 0m13.217s 00:15:55.994 sys 0m1.523s 00:15:55.994 10:27:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.994 ************************************ 00:15:55.994 END TEST raid5f_superblock_test 00:15:55.994 ************************************ 00:15:55.994 10:27:09 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.994 10:27:09 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:55.994 10:27:09 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:55.994 10:27:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:55.994 10:27:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.994 10:27:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:55.994 ************************************ 00:15:55.994 START TEST raid5f_rebuild_test 00:15:55.994 ************************************ 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:55.994 10:27:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84269 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84269 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84269 ']' 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.994 10:27:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:55.994 [2024-11-19 10:27:09.516447] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:15:55.994 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:55.994 Zero copy mechanism will not be used. 
00:15:55.994 [2024-11-19 10:27:09.516658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84269 ] 00:15:55.994 [2024-11-19 10:27:09.690918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.254 [2024-11-19 10:27:09.799749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.254 [2024-11-19 10:27:09.985913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.254 [2024-11-19 10:27:09.985967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 BaseBdev1_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 [2024-11-19 10:27:10.368696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:56.824 [2024-11-19 10:27:10.368833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.824 [2024-11-19 10:27:10.368872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:56.824 [2024-11-19 10:27:10.368904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.824 [2024-11-19 10:27:10.370923] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.824 [2024-11-19 10:27:10.371003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.824 BaseBdev1 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 BaseBdev2_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 [2024-11-19 10:27:10.422283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:56.824 [2024-11-19 10:27:10.422406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.824 [2024-11-19 10:27:10.422439] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:56.824 [2024-11-19 10:27:10.422471] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.824 [2024-11-19 10:27:10.424469] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.824 [2024-11-19 10:27:10.424553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:56.824 BaseBdev2 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 BaseBdev3_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 [2024-11-19 10:27:10.511144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:56.824 [2024-11-19 10:27:10.511244] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.824 [2024-11-19 10:27:10.511281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:56.824 [2024-11-19 10:27:10.511310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.824 
[2024-11-19 10:27:10.513336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.824 [2024-11-19 10:27:10.513422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:56.824 BaseBdev3 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.824 BaseBdev4_malloc 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:56.824 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.825 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.825 [2024-11-19 10:27:10.563632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:56.825 [2024-11-19 10:27:10.563680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.825 [2024-11-19 10:27:10.563697] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:56.825 [2024-11-19 10:27:10.563708] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.825 [2024-11-19 10:27:10.565681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.825 [2024-11-19 10:27:10.565731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:15:56.825 BaseBdev4 00:15:56.825 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.825 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:56.825 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.825 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 spare_malloc 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 spare_delay 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 [2024-11-19 10:27:10.628473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:57.085 [2024-11-19 10:27:10.628603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:57.085 [2024-11-19 10:27:10.628640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:57.085 [2024-11-19 10:27:10.628672] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:57.085 [2024-11-19 10:27:10.630669] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:57.085 [2024-11-19 10:27:10.630741] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:57.085 spare 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 [2024-11-19 10:27:10.640509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:57.085 [2024-11-19 10:27:10.642118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.085 [2024-11-19 10:27:10.642168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:57.085 [2024-11-19 10:27:10.642211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:57.085 [2024-11-19 10:27:10.642288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:57.085 [2024-11-19 10:27:10.642299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:57.085 [2024-11-19 10:27:10.642513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:57.085 [2024-11-19 10:27:10.649501] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:57.085 [2024-11-19 10:27:10.649520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:57.085 [2024-11-19 10:27:10.649710] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:57.085 10:27:10 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.085 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.086 "name": "raid_bdev1", 00:15:57.086 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:15:57.086 "strip_size_kb": 64, 00:15:57.086 "state": "online", 00:15:57.086 
"raid_level": "raid5f", 00:15:57.086 "superblock": false, 00:15:57.086 "num_base_bdevs": 4, 00:15:57.086 "num_base_bdevs_discovered": 4, 00:15:57.086 "num_base_bdevs_operational": 4, 00:15:57.086 "base_bdevs_list": [ 00:15:57.086 { 00:15:57.086 "name": "BaseBdev1", 00:15:57.086 "uuid": "7f4dad2e-83dd-59df-a790-c7e42eace355", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 0, 00:15:57.086 "data_size": 65536 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev2", 00:15:57.086 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 0, 00:15:57.086 "data_size": 65536 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev3", 00:15:57.086 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 0, 00:15:57.086 "data_size": 65536 00:15:57.086 }, 00:15:57.086 { 00:15:57.086 "name": "BaseBdev4", 00:15:57.086 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:15:57.086 "is_configured": true, 00:15:57.086 "data_offset": 0, 00:15:57.086 "data_size": 65536 00:15:57.086 } 00:15:57.086 ] 00:15:57.086 }' 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.086 10:27:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.346 [2024-11-19 10:27:11.069492] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:57.346 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:57.606 [2024-11-19 10:27:11.324945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:57.606 /dev/nbd0 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:57.606 1+0 records in 00:15:57.606 1+0 records out 00:15:57.606 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350234 s, 11.7 MB/s 00:15:57.606 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:57.866 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:58.127 512+0 records in 00:15:58.127 512+0 records out 00:15:58.127 100663296 bytes (101 MB, 96 MiB) copied, 0.454429 s, 222 MB/s 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:58.127 10:27:11 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:58.394 
[2024-11-19 10:27:12.057163] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.394 [2024-11-19 10:27:12.070652] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.394 "name": "raid_bdev1", 00:15:58.394 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:15:58.394 "strip_size_kb": 64, 00:15:58.394 "state": "online", 00:15:58.394 "raid_level": "raid5f", 00:15:58.394 "superblock": false, 00:15:58.394 "num_base_bdevs": 4, 00:15:58.394 "num_base_bdevs_discovered": 3, 00:15:58.394 "num_base_bdevs_operational": 3, 00:15:58.394 "base_bdevs_list": [ 00:15:58.394 { 00:15:58.394 "name": null, 00:15:58.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.394 "is_configured": false, 00:15:58.394 "data_offset": 0, 00:15:58.394 "data_size": 65536 00:15:58.394 }, 00:15:58.394 { 00:15:58.394 "name": "BaseBdev2", 00:15:58.394 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:15:58.394 "is_configured": true, 00:15:58.394 "data_offset": 0, 00:15:58.394 "data_size": 65536 00:15:58.394 }, 00:15:58.394 { 00:15:58.394 "name": "BaseBdev3", 00:15:58.394 "uuid": 
"39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:15:58.394 "is_configured": true, 00:15:58.394 "data_offset": 0, 00:15:58.394 "data_size": 65536 00:15:58.394 }, 00:15:58.394 { 00:15:58.394 "name": "BaseBdev4", 00:15:58.394 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:15:58.394 "is_configured": true, 00:15:58.394 "data_offset": 0, 00:15:58.394 "data_size": 65536 00:15:58.394 } 00:15:58.394 ] 00:15:58.394 }' 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.394 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.964 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:58.964 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.964 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.964 [2024-11-19 10:27:12.537822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:58.964 [2024-11-19 10:27:12.552464] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:58.964 10:27:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.964 10:27:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:58.964 [2024-11-19 10:27:12.561293] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.902 10:27:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.902 "name": "raid_bdev1", 00:15:59.902 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:15:59.902 "strip_size_kb": 64, 00:15:59.902 "state": "online", 00:15:59.902 "raid_level": "raid5f", 00:15:59.902 "superblock": false, 00:15:59.902 "num_base_bdevs": 4, 00:15:59.902 "num_base_bdevs_discovered": 4, 00:15:59.902 "num_base_bdevs_operational": 4, 00:15:59.902 "process": { 00:15:59.902 "type": "rebuild", 00:15:59.902 "target": "spare", 00:15:59.902 "progress": { 00:15:59.902 "blocks": 19200, 00:15:59.902 "percent": 9 00:15:59.902 } 00:15:59.902 }, 00:15:59.902 "base_bdevs_list": [ 00:15:59.902 { 00:15:59.902 "name": "spare", 00:15:59.902 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:15:59.902 "is_configured": true, 00:15:59.902 "data_offset": 0, 00:15:59.902 "data_size": 65536 00:15:59.902 }, 00:15:59.902 { 00:15:59.902 "name": "BaseBdev2", 00:15:59.902 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:15:59.902 "is_configured": true, 00:15:59.902 "data_offset": 0, 00:15:59.902 "data_size": 65536 00:15:59.902 }, 00:15:59.902 { 00:15:59.902 "name": "BaseBdev3", 00:15:59.902 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:15:59.902 "is_configured": true, 00:15:59.902 "data_offset": 0, 00:15:59.902 "data_size": 65536 00:15:59.902 }, 
00:15:59.902 { 00:15:59.902 "name": "BaseBdev4", 00:15:59.902 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:15:59.902 "is_configured": true, 00:15:59.902 "data_offset": 0, 00:15:59.902 "data_size": 65536 00:15:59.902 } 00:15:59.902 ] 00:15:59.902 }' 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.902 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.162 [2024-11-19 10:27:13.715894] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.162 [2024-11-19 10:27:13.766688] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:00.162 [2024-11-19 10:27:13.766793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:00.162 [2024-11-19 10:27:13.766829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:00.162 [2024-11-19 10:27:13.766853] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.162 "name": "raid_bdev1", 00:16:00.162 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:00.162 "strip_size_kb": 64, 00:16:00.162 "state": "online", 00:16:00.162 "raid_level": "raid5f", 00:16:00.162 "superblock": false, 00:16:00.162 "num_base_bdevs": 4, 00:16:00.162 "num_base_bdevs_discovered": 3, 00:16:00.162 "num_base_bdevs_operational": 3, 00:16:00.162 "base_bdevs_list": [ 00:16:00.162 { 00:16:00.162 "name": null, 00:16:00.162 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:00.162 "is_configured": false, 00:16:00.162 "data_offset": 0, 00:16:00.162 "data_size": 65536 00:16:00.162 }, 00:16:00.162 { 00:16:00.162 "name": "BaseBdev2", 00:16:00.162 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:00.162 "is_configured": true, 00:16:00.162 "data_offset": 0, 00:16:00.162 "data_size": 65536 00:16:00.162 }, 00:16:00.162 { 00:16:00.162 "name": "BaseBdev3", 00:16:00.162 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:00.162 "is_configured": true, 00:16:00.162 "data_offset": 0, 00:16:00.162 "data_size": 65536 00:16:00.162 }, 00:16:00.162 { 00:16:00.162 "name": "BaseBdev4", 00:16:00.162 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:00.162 "is_configured": true, 00:16:00.162 "data_offset": 0, 00:16:00.162 "data_size": 65536 00:16:00.162 } 00:16:00.162 ] 00:16:00.162 }' 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.162 10:27:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.730 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.730 "name": "raid_bdev1", 00:16:00.730 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:00.730 "strip_size_kb": 64, 00:16:00.730 "state": "online", 00:16:00.730 "raid_level": "raid5f", 00:16:00.730 "superblock": false, 00:16:00.730 "num_base_bdevs": 4, 00:16:00.730 "num_base_bdevs_discovered": 3, 00:16:00.730 "num_base_bdevs_operational": 3, 00:16:00.730 "base_bdevs_list": [ 00:16:00.730 { 00:16:00.730 "name": null, 00:16:00.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.730 "is_configured": false, 00:16:00.730 "data_offset": 0, 00:16:00.730 "data_size": 65536 00:16:00.730 }, 00:16:00.730 { 00:16:00.730 "name": "BaseBdev2", 00:16:00.731 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:00.731 "is_configured": true, 00:16:00.731 "data_offset": 0, 00:16:00.731 "data_size": 65536 00:16:00.731 }, 00:16:00.731 { 00:16:00.731 "name": "BaseBdev3", 00:16:00.731 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:00.731 "is_configured": true, 00:16:00.731 "data_offset": 0, 00:16:00.731 "data_size": 65536 00:16:00.731 }, 00:16:00.731 { 00:16:00.731 "name": "BaseBdev4", 00:16:00.731 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:00.731 "is_configured": true, 00:16:00.731 "data_offset": 0, 00:16:00.731 "data_size": 65536 00:16:00.731 } 00:16:00.731 ] 00:16:00.731 }' 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.731 [2024-11-19 10:27:14.374427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.731 [2024-11-19 10:27:14.388369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.731 10:27:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:00.731 [2024-11-19 10:27:14.397108] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.668 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.668 "name": "raid_bdev1", 00:16:01.668 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:01.668 "strip_size_kb": 64, 00:16:01.668 "state": "online", 00:16:01.668 "raid_level": "raid5f", 00:16:01.668 "superblock": false, 00:16:01.668 "num_base_bdevs": 4, 00:16:01.668 "num_base_bdevs_discovered": 4, 00:16:01.668 "num_base_bdevs_operational": 4, 00:16:01.668 "process": { 00:16:01.668 "type": "rebuild", 00:16:01.668 "target": "spare", 00:16:01.668 "progress": { 00:16:01.668 "blocks": 19200, 00:16:01.668 "percent": 9 00:16:01.668 } 00:16:01.668 }, 00:16:01.668 "base_bdevs_list": [ 00:16:01.668 { 00:16:01.668 "name": "spare", 00:16:01.668 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:01.668 "is_configured": true, 00:16:01.668 "data_offset": 0, 00:16:01.668 "data_size": 65536 00:16:01.668 }, 00:16:01.668 { 00:16:01.668 "name": "BaseBdev2", 00:16:01.669 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:01.669 "is_configured": true, 00:16:01.669 "data_offset": 0, 00:16:01.669 "data_size": 65536 00:16:01.669 }, 00:16:01.669 { 00:16:01.669 "name": "BaseBdev3", 00:16:01.669 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:01.669 "is_configured": true, 00:16:01.669 "data_offset": 0, 00:16:01.669 "data_size": 65536 00:16:01.669 }, 00:16:01.669 { 00:16:01.669 "name": "BaseBdev4", 00:16:01.669 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:01.669 "is_configured": true, 00:16:01.669 "data_offset": 0, 00:16:01.669 "data_size": 65536 00:16:01.669 } 00:16:01.669 ] 00:16:01.669 }' 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=600 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.928 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.928 "name": "raid_bdev1", 00:16:01.928 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:01.928 "strip_size_kb": 64, 
00:16:01.928 "state": "online", 00:16:01.928 "raid_level": "raid5f", 00:16:01.929 "superblock": false, 00:16:01.929 "num_base_bdevs": 4, 00:16:01.929 "num_base_bdevs_discovered": 4, 00:16:01.929 "num_base_bdevs_operational": 4, 00:16:01.929 "process": { 00:16:01.929 "type": "rebuild", 00:16:01.929 "target": "spare", 00:16:01.929 "progress": { 00:16:01.929 "blocks": 21120, 00:16:01.929 "percent": 10 00:16:01.929 } 00:16:01.929 }, 00:16:01.929 "base_bdevs_list": [ 00:16:01.929 { 00:16:01.929 "name": "spare", 00:16:01.929 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:01.929 "is_configured": true, 00:16:01.929 "data_offset": 0, 00:16:01.929 "data_size": 65536 00:16:01.929 }, 00:16:01.929 { 00:16:01.929 "name": "BaseBdev2", 00:16:01.929 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:01.929 "is_configured": true, 00:16:01.929 "data_offset": 0, 00:16:01.929 "data_size": 65536 00:16:01.929 }, 00:16:01.929 { 00:16:01.929 "name": "BaseBdev3", 00:16:01.929 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:01.929 "is_configured": true, 00:16:01.929 "data_offset": 0, 00:16:01.929 "data_size": 65536 00:16:01.929 }, 00:16:01.929 { 00:16:01.929 "name": "BaseBdev4", 00:16:01.929 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:01.929 "is_configured": true, 00:16:01.929 "data_offset": 0, 00:16:01.929 "data_size": 65536 00:16:01.929 } 00:16:01.929 ] 00:16:01.929 }' 00:16:01.929 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.929 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.929 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.929 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.929 10:27:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:03.311 "name": "raid_bdev1", 00:16:03.311 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:03.311 "strip_size_kb": 64, 00:16:03.311 "state": "online", 00:16:03.311 "raid_level": "raid5f", 00:16:03.311 "superblock": false, 00:16:03.311 "num_base_bdevs": 4, 00:16:03.311 "num_base_bdevs_discovered": 4, 00:16:03.311 "num_base_bdevs_operational": 4, 00:16:03.311 "process": { 00:16:03.311 "type": "rebuild", 00:16:03.311 "target": "spare", 00:16:03.311 "progress": { 00:16:03.311 "blocks": 42240, 00:16:03.311 "percent": 21 00:16:03.311 } 00:16:03.311 }, 00:16:03.311 "base_bdevs_list": [ 00:16:03.311 { 00:16:03.311 "name": "spare", 00:16:03.311 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:03.311 "is_configured": true, 
00:16:03.311 "data_offset": 0, 00:16:03.311 "data_size": 65536 00:16:03.311 }, 00:16:03.311 { 00:16:03.311 "name": "BaseBdev2", 00:16:03.311 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:03.311 "is_configured": true, 00:16:03.311 "data_offset": 0, 00:16:03.311 "data_size": 65536 00:16:03.311 }, 00:16:03.311 { 00:16:03.311 "name": "BaseBdev3", 00:16:03.311 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:03.311 "is_configured": true, 00:16:03.311 "data_offset": 0, 00:16:03.311 "data_size": 65536 00:16:03.311 }, 00:16:03.311 { 00:16:03.311 "name": "BaseBdev4", 00:16:03.311 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:03.311 "is_configured": true, 00:16:03.311 "data_offset": 0, 00:16:03.311 "data_size": 65536 00:16:03.311 } 00:16:03.311 ] 00:16:03.311 }' 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.311 10:27:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.251 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.251 "name": "raid_bdev1", 00:16:04.251 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:04.251 "strip_size_kb": 64, 00:16:04.251 "state": "online", 00:16:04.251 "raid_level": "raid5f", 00:16:04.251 "superblock": false, 00:16:04.251 "num_base_bdevs": 4, 00:16:04.251 "num_base_bdevs_discovered": 4, 00:16:04.251 "num_base_bdevs_operational": 4, 00:16:04.251 "process": { 00:16:04.251 "type": "rebuild", 00:16:04.251 "target": "spare", 00:16:04.251 "progress": { 00:16:04.251 "blocks": 65280, 00:16:04.251 "percent": 33 00:16:04.251 } 00:16:04.251 }, 00:16:04.251 "base_bdevs_list": [ 00:16:04.251 { 00:16:04.251 "name": "spare", 00:16:04.251 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:04.251 "is_configured": true, 00:16:04.251 "data_offset": 0, 00:16:04.251 "data_size": 65536 00:16:04.251 }, 00:16:04.251 { 00:16:04.251 "name": "BaseBdev2", 00:16:04.251 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:04.251 "is_configured": true, 00:16:04.251 "data_offset": 0, 00:16:04.251 "data_size": 65536 00:16:04.251 }, 00:16:04.251 { 00:16:04.251 "name": "BaseBdev3", 00:16:04.251 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:04.251 "is_configured": true, 00:16:04.251 "data_offset": 0, 00:16:04.251 "data_size": 65536 00:16:04.251 }, 00:16:04.251 { 00:16:04.252 "name": "BaseBdev4", 00:16:04.252 "uuid": 
"80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:04.252 "is_configured": true, 00:16:04.252 "data_offset": 0, 00:16:04.252 "data_size": 65536 00:16:04.252 } 00:16:04.252 ] 00:16:04.252 }' 00:16:04.252 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.252 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:04.252 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.252 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:04.252 10:27:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.192 10:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.452 10:27:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.452 10:27:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:05.452 "name": "raid_bdev1", 00:16:05.452 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:05.452 "strip_size_kb": 64, 00:16:05.452 "state": "online", 00:16:05.452 "raid_level": "raid5f", 00:16:05.452 "superblock": false, 00:16:05.452 "num_base_bdevs": 4, 00:16:05.452 "num_base_bdevs_discovered": 4, 00:16:05.452 "num_base_bdevs_operational": 4, 00:16:05.452 "process": { 00:16:05.452 "type": "rebuild", 00:16:05.452 "target": "spare", 00:16:05.452 "progress": { 00:16:05.452 "blocks": 86400, 00:16:05.452 "percent": 43 00:16:05.452 } 00:16:05.452 }, 00:16:05.452 "base_bdevs_list": [ 00:16:05.452 { 00:16:05.452 "name": "spare", 00:16:05.452 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:05.452 "is_configured": true, 00:16:05.452 "data_offset": 0, 00:16:05.452 "data_size": 65536 00:16:05.452 }, 00:16:05.452 { 00:16:05.452 "name": "BaseBdev2", 00:16:05.452 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:05.452 "is_configured": true, 00:16:05.452 "data_offset": 0, 00:16:05.452 "data_size": 65536 00:16:05.452 }, 00:16:05.452 { 00:16:05.452 "name": "BaseBdev3", 00:16:05.452 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:05.452 "is_configured": true, 00:16:05.452 "data_offset": 0, 00:16:05.452 "data_size": 65536 00:16:05.452 }, 00:16:05.452 { 00:16:05.452 "name": "BaseBdev4", 00:16:05.452 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:05.452 "is_configured": true, 00:16:05.452 "data_offset": 0, 00:16:05.452 "data_size": 65536 00:16:05.452 } 00:16:05.452 ] 00:16:05.452 }' 00:16:05.452 10:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:05.452 10:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:05.452 10:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:05.452 10:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:16:05.452 10:27:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.392 "name": "raid_bdev1", 00:16:06.392 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:06.392 "strip_size_kb": 64, 00:16:06.392 "state": "online", 00:16:06.392 "raid_level": "raid5f", 00:16:06.392 "superblock": false, 00:16:06.392 "num_base_bdevs": 4, 00:16:06.392 "num_base_bdevs_discovered": 4, 00:16:06.392 "num_base_bdevs_operational": 4, 00:16:06.392 "process": { 00:16:06.392 "type": "rebuild", 00:16:06.392 "target": "spare", 00:16:06.392 "progress": { 00:16:06.392 "blocks": 107520, 00:16:06.392 "percent": 54 00:16:06.392 } 00:16:06.392 }, 00:16:06.392 
"base_bdevs_list": [ 00:16:06.392 { 00:16:06.392 "name": "spare", 00:16:06.392 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:06.392 "is_configured": true, 00:16:06.392 "data_offset": 0, 00:16:06.392 "data_size": 65536 00:16:06.392 }, 00:16:06.392 { 00:16:06.392 "name": "BaseBdev2", 00:16:06.392 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:06.392 "is_configured": true, 00:16:06.392 "data_offset": 0, 00:16:06.392 "data_size": 65536 00:16:06.392 }, 00:16:06.392 { 00:16:06.392 "name": "BaseBdev3", 00:16:06.392 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:06.392 "is_configured": true, 00:16:06.392 "data_offset": 0, 00:16:06.392 "data_size": 65536 00:16:06.392 }, 00:16:06.392 { 00:16:06.392 "name": "BaseBdev4", 00:16:06.392 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:06.392 "is_configured": true, 00:16:06.392 "data_offset": 0, 00:16:06.392 "data_size": 65536 00:16:06.392 } 00:16:06.392 ] 00:16:06.392 }' 00:16:06.392 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.652 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.652 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.652 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.652 10:27:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:07.603 10:27:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.603 "name": "raid_bdev1", 00:16:07.603 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:07.603 "strip_size_kb": 64, 00:16:07.603 "state": "online", 00:16:07.603 "raid_level": "raid5f", 00:16:07.603 "superblock": false, 00:16:07.603 "num_base_bdevs": 4, 00:16:07.603 "num_base_bdevs_discovered": 4, 00:16:07.603 "num_base_bdevs_operational": 4, 00:16:07.603 "process": { 00:16:07.603 "type": "rebuild", 00:16:07.603 "target": "spare", 00:16:07.603 "progress": { 00:16:07.603 "blocks": 130560, 00:16:07.603 "percent": 66 00:16:07.603 } 00:16:07.603 }, 00:16:07.603 "base_bdevs_list": [ 00:16:07.603 { 00:16:07.603 "name": "spare", 00:16:07.603 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:07.603 "is_configured": true, 00:16:07.603 "data_offset": 0, 00:16:07.603 "data_size": 65536 00:16:07.603 }, 00:16:07.603 { 00:16:07.603 "name": "BaseBdev2", 00:16:07.603 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:07.603 "is_configured": true, 00:16:07.603 "data_offset": 0, 00:16:07.603 "data_size": 65536 00:16:07.603 }, 00:16:07.603 { 00:16:07.603 "name": "BaseBdev3", 00:16:07.603 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:07.603 
"is_configured": true, 00:16:07.603 "data_offset": 0, 00:16:07.603 "data_size": 65536 00:16:07.603 }, 00:16:07.603 { 00:16:07.603 "name": "BaseBdev4", 00:16:07.603 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:07.603 "is_configured": true, 00:16:07.603 "data_offset": 0, 00:16:07.603 "data_size": 65536 00:16:07.603 } 00:16:07.603 ] 00:16:07.603 }' 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.603 10:27:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.984 "name": "raid_bdev1", 00:16:08.984 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:08.984 "strip_size_kb": 64, 00:16:08.984 "state": "online", 00:16:08.984 "raid_level": "raid5f", 00:16:08.984 "superblock": false, 00:16:08.984 "num_base_bdevs": 4, 00:16:08.984 "num_base_bdevs_discovered": 4, 00:16:08.984 "num_base_bdevs_operational": 4, 00:16:08.984 "process": { 00:16:08.984 "type": "rebuild", 00:16:08.984 "target": "spare", 00:16:08.984 "progress": { 00:16:08.984 "blocks": 151680, 00:16:08.984 "percent": 77 00:16:08.984 } 00:16:08.984 }, 00:16:08.984 "base_bdevs_list": [ 00:16:08.984 { 00:16:08.984 "name": "spare", 00:16:08.984 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:08.984 "is_configured": true, 00:16:08.984 "data_offset": 0, 00:16:08.984 "data_size": 65536 00:16:08.984 }, 00:16:08.984 { 00:16:08.984 "name": "BaseBdev2", 00:16:08.984 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:08.984 "is_configured": true, 00:16:08.984 "data_offset": 0, 00:16:08.984 "data_size": 65536 00:16:08.984 }, 00:16:08.984 { 00:16:08.984 "name": "BaseBdev3", 00:16:08.984 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:08.984 "is_configured": true, 00:16:08.984 "data_offset": 0, 00:16:08.984 "data_size": 65536 00:16:08.984 }, 00:16:08.984 { 00:16:08.984 "name": "BaseBdev4", 00:16:08.984 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:08.984 "is_configured": true, 00:16:08.984 "data_offset": 0, 00:16:08.984 "data_size": 65536 00:16:08.984 } 00:16:08.984 ] 00:16:08.984 }' 00:16:08.984 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.985 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.985 10:27:22 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.985 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.985 10:27:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:09.924 "name": "raid_bdev1", 00:16:09.924 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:09.924 "strip_size_kb": 64, 00:16:09.924 "state": "online", 00:16:09.924 "raid_level": "raid5f", 00:16:09.924 "superblock": false, 00:16:09.924 "num_base_bdevs": 4, 00:16:09.924 "num_base_bdevs_discovered": 4, 00:16:09.924 "num_base_bdevs_operational": 4, 00:16:09.924 "process": { 00:16:09.924 
"type": "rebuild", 00:16:09.924 "target": "spare", 00:16:09.924 "progress": { 00:16:09.924 "blocks": 172800, 00:16:09.924 "percent": 87 00:16:09.924 } 00:16:09.924 }, 00:16:09.924 "base_bdevs_list": [ 00:16:09.924 { 00:16:09.924 "name": "spare", 00:16:09.924 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:09.924 "is_configured": true, 00:16:09.924 "data_offset": 0, 00:16:09.924 "data_size": 65536 00:16:09.924 }, 00:16:09.924 { 00:16:09.924 "name": "BaseBdev2", 00:16:09.924 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:09.924 "is_configured": true, 00:16:09.924 "data_offset": 0, 00:16:09.924 "data_size": 65536 00:16:09.924 }, 00:16:09.924 { 00:16:09.924 "name": "BaseBdev3", 00:16:09.924 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:09.924 "is_configured": true, 00:16:09.924 "data_offset": 0, 00:16:09.924 "data_size": 65536 00:16:09.924 }, 00:16:09.924 { 00:16:09.924 "name": "BaseBdev4", 00:16:09.924 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:09.924 "is_configured": true, 00:16:09.924 "data_offset": 0, 00:16:09.924 "data_size": 65536 00:16:09.924 } 00:16:09.924 ] 00:16:09.924 }' 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:09.924 10:27:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.307 "name": "raid_bdev1", 00:16:11.307 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:11.307 "strip_size_kb": 64, 00:16:11.307 "state": "online", 00:16:11.307 "raid_level": "raid5f", 00:16:11.307 "superblock": false, 00:16:11.307 "num_base_bdevs": 4, 00:16:11.307 "num_base_bdevs_discovered": 4, 00:16:11.307 "num_base_bdevs_operational": 4, 00:16:11.307 "process": { 00:16:11.307 "type": "rebuild", 00:16:11.307 "target": "spare", 00:16:11.307 "progress": { 00:16:11.307 "blocks": 195840, 00:16:11.307 "percent": 99 00:16:11.307 } 00:16:11.307 }, 00:16:11.307 "base_bdevs_list": [ 00:16:11.307 { 00:16:11.307 "name": "spare", 00:16:11.307 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:11.307 "is_configured": true, 00:16:11.307 "data_offset": 0, 00:16:11.307 "data_size": 65536 00:16:11.307 }, 00:16:11.307 { 00:16:11.307 "name": "BaseBdev2", 00:16:11.307 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:11.307 "is_configured": true, 00:16:11.307 "data_offset": 0, 00:16:11.307 
"data_size": 65536 00:16:11.307 }, 00:16:11.307 { 00:16:11.307 "name": "BaseBdev3", 00:16:11.307 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:11.307 "is_configured": true, 00:16:11.307 "data_offset": 0, 00:16:11.307 "data_size": 65536 00:16:11.307 }, 00:16:11.307 { 00:16:11.307 "name": "BaseBdev4", 00:16:11.307 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:11.307 "is_configured": true, 00:16:11.307 "data_offset": 0, 00:16:11.307 "data_size": 65536 00:16:11.307 } 00:16:11.307 ] 00:16:11.307 }' 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.307 [2024-11-19 10:27:24.737872] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:11.307 [2024-11-19 10:27:24.737933] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:11.307 [2024-11-19 10:27:24.737973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.307 10:27:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.247 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.247 "name": "raid_bdev1", 00:16:12.247 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:12.247 "strip_size_kb": 64, 00:16:12.247 "state": "online", 00:16:12.247 "raid_level": "raid5f", 00:16:12.247 "superblock": false, 00:16:12.247 "num_base_bdevs": 4, 00:16:12.247 "num_base_bdevs_discovered": 4, 00:16:12.247 "num_base_bdevs_operational": 4, 00:16:12.247 "base_bdevs_list": [ 00:16:12.247 { 00:16:12.247 "name": "spare", 00:16:12.247 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:12.247 "is_configured": true, 00:16:12.247 "data_offset": 0, 00:16:12.247 "data_size": 65536 00:16:12.247 }, 00:16:12.247 { 00:16:12.247 "name": "BaseBdev2", 00:16:12.247 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:12.247 "is_configured": true, 00:16:12.247 "data_offset": 0, 00:16:12.247 "data_size": 65536 00:16:12.247 }, 00:16:12.247 { 00:16:12.247 "name": "BaseBdev3", 00:16:12.247 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:12.247 "is_configured": true, 00:16:12.247 "data_offset": 0, 00:16:12.247 "data_size": 65536 00:16:12.247 }, 00:16:12.247 { 00:16:12.247 "name": "BaseBdev4", 00:16:12.247 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:12.247 "is_configured": true, 00:16:12.247 "data_offset": 0, 
00:16:12.247 "data_size": 65536 00:16:12.247 } 00:16:12.247 ] 00:16:12.248 }' 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.248 10:27:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.248 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.248 "name": "raid_bdev1", 00:16:12.248 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:12.248 "strip_size_kb": 64, 00:16:12.248 "state": "online", 00:16:12.248 "raid_level": 
"raid5f", 00:16:12.248 "superblock": false, 00:16:12.248 "num_base_bdevs": 4, 00:16:12.248 "num_base_bdevs_discovered": 4, 00:16:12.248 "num_base_bdevs_operational": 4, 00:16:12.248 "base_bdevs_list": [ 00:16:12.248 { 00:16:12.248 "name": "spare", 00:16:12.248 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:12.248 "is_configured": true, 00:16:12.248 "data_offset": 0, 00:16:12.248 "data_size": 65536 00:16:12.248 }, 00:16:12.248 { 00:16:12.248 "name": "BaseBdev2", 00:16:12.248 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:12.248 "is_configured": true, 00:16:12.248 "data_offset": 0, 00:16:12.248 "data_size": 65536 00:16:12.248 }, 00:16:12.248 { 00:16:12.248 "name": "BaseBdev3", 00:16:12.248 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:12.248 "is_configured": true, 00:16:12.248 "data_offset": 0, 00:16:12.248 "data_size": 65536 00:16:12.248 }, 00:16:12.248 { 00:16:12.248 "name": "BaseBdev4", 00:16:12.248 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:12.248 "is_configured": true, 00:16:12.248 "data_offset": 0, 00:16:12.248 "data_size": 65536 00:16:12.248 } 00:16:12.248 ] 00:16:12.248 }' 00:16:12.248 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.508 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.509 "name": "raid_bdev1", 00:16:12.509 "uuid": "7b4a15a4-40fb-4ece-898c-e529d28975e1", 00:16:12.509 "strip_size_kb": 64, 00:16:12.509 "state": "online", 00:16:12.509 "raid_level": "raid5f", 00:16:12.509 "superblock": false, 00:16:12.509 "num_base_bdevs": 4, 00:16:12.509 "num_base_bdevs_discovered": 4, 00:16:12.509 "num_base_bdevs_operational": 4, 00:16:12.509 "base_bdevs_list": [ 00:16:12.509 { 00:16:12.509 "name": "spare", 00:16:12.509 "uuid": "444fa162-d481-5bd7-85bb-e0437017d251", 00:16:12.509 "is_configured": true, 00:16:12.509 "data_offset": 0, 00:16:12.509 "data_size": 65536 00:16:12.509 }, 00:16:12.509 { 00:16:12.509 "name": "BaseBdev2", 
00:16:12.509 "uuid": "b7ec91cf-8cf3-537a-b380-6936470bb6a5", 00:16:12.509 "is_configured": true, 00:16:12.509 "data_offset": 0, 00:16:12.509 "data_size": 65536 00:16:12.509 }, 00:16:12.509 { 00:16:12.509 "name": "BaseBdev3", 00:16:12.509 "uuid": "39423ab3-eba8-5724-ba3f-6f75129ff0b3", 00:16:12.509 "is_configured": true, 00:16:12.509 "data_offset": 0, 00:16:12.509 "data_size": 65536 00:16:12.509 }, 00:16:12.509 { 00:16:12.509 "name": "BaseBdev4", 00:16:12.509 "uuid": "80a5ac4c-0201-598a-bb7c-d6deec3d7276", 00:16:12.509 "is_configured": true, 00:16:12.509 "data_offset": 0, 00:16:12.509 "data_size": 65536 00:16:12.509 } 00:16:12.509 ] 00:16:12.509 }' 00:16:12.509 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.509 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.079 [2024-11-19 10:27:26.575720] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:13.079 [2024-11-19 10:27:26.575749] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.079 [2024-11-19 10:27:26.575826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.079 [2024-11-19 10:27:26.575916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:13.079 [2024-11-19 10:27:26.575925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # jq length 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:13.079 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:13.080 /dev/nbd0 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.080 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.080 1+0 records in 00:16:13.080 1+0 records out 00:16:13.080 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228776 s, 17.9 MB/s 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.340 10:27:26 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:13.340 /dev/nbd1 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:13.340 1+0 records in 00:16:13.340 1+0 records out 00:16:13.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413403 s, 9.9 MB/s 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:13.340 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.600 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:13.860 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84269 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84269 ']' 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84269 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84269 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.120 killing process with pid 84269 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84269' 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84269 00:16:14.120 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.120 00:16:14.120 Latency(us) 00:16:14.120 [2024-11-19T10:27:27.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.120 [2024-11-19T10:27:27.901Z] =================================================================================================================== 00:16:14.120 [2024-11-19T10:27:27.901Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.120 [2024-11-19 10:27:27.727324] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.120 10:27:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84269 00:16:14.691 [2024-11-19 10:27:28.177691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:15.632 00:16:15.632 real 0m19.771s 00:16:15.632 user 0m23.660s 00:16:15.632 sys 0m2.178s 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.632 ************************************ 00:16:15.632 END TEST raid5f_rebuild_test 00:16:15.632 ************************************ 00:16:15.632 10:27:29 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:15.632 10:27:29 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:15.632 10:27:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:15.632 10:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:15.632 ************************************ 00:16:15.632 START TEST raid5f_rebuild_test_sb 00:16:15.632 ************************************ 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:15.632 10:27:29 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84787 
00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84787 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84787 ']' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:15.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:15.632 10:27:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.632 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:15.632 Zero copy mechanism will not be used. 00:16:15.632 [2024-11-19 10:27:29.366592] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:16:15.632 [2024-11-19 10:27:29.366698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84787 ] 00:16:15.892 [2024-11-19 10:27:29.537137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.892 [2024-11-19 10:27:29.634255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.153 [2024-11-19 10:27:29.824569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.153 [2024-11-19 10:27:29.824622] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:16.413 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:16.413 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:16.413 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.413 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:16.413 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.413 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 BaseBdev1_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 [2024-11-19 10:27:30.227056] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:16.673 [2024-11-19 10:27:30.227117] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.673 [2024-11-19 10:27:30.227140] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:16.673 [2024-11-19 10:27:30.227150] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.673 [2024-11-19 10:27:30.229134] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.673 [2024-11-19 10:27:30.229171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:16.673 BaseBdev1 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 BaseBdev2_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 [2024-11-19 10:27:30.276688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:16.673 [2024-11-19 10:27:30.276741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:16.673 [2024-11-19 10:27:30.276757] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:16.673 [2024-11-19 10:27:30.276768] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.673 [2024-11-19 10:27:30.278736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.673 [2024-11-19 10:27:30.278772] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:16.673 BaseBdev2 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 BaseBdev3_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 [2024-11-19 10:27:30.364141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:16.673 [2024-11-19 10:27:30.364192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.673 [2024-11-19 10:27:30.364211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:16.673 [2024-11-19 
10:27:30.364222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.673 [2024-11-19 10:27:30.366145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.673 [2024-11-19 10:27:30.366183] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:16.673 BaseBdev3 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 BaseBdev4_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.673 [2024-11-19 10:27:30.417544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:16.673 [2024-11-19 10:27:30.417592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.673 [2024-11-19 10:27:30.417609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:16.673 [2024-11-19 10:27:30.417619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.673 [2024-11-19 10:27:30.419595] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:16.673 [2024-11-19 10:27:30.419636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:16.673 BaseBdev4 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.673 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.933 spare_malloc 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.933 spare_delay 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.933 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.933 [2024-11-19 10:27:30.481407] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:16.933 [2024-11-19 10:27:30.481461] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.933 [2024-11-19 10:27:30.481480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:16.933 [2024-11-19 10:27:30.481490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.933 [2024-11-19 10:27:30.483494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.934 [2024-11-19 10:27:30.483532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:16.934 spare 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.934 [2024-11-19 10:27:30.493434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:16.934 [2024-11-19 10:27:30.495168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.934 [2024-11-19 10:27:30.495229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.934 [2024-11-19 10:27:30.495277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:16.934 [2024-11-19 10:27:30.495474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:16.934 [2024-11-19 10:27:30.495497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:16.934 [2024-11-19 10:27:30.495717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:16.934 [2024-11-19 10:27:30.502373] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:16.934 [2024-11-19 10:27:30.502395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:16.934 [2024-11-19 10:27:30.502553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.934 10:27:30 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.934 "name": "raid_bdev1", 00:16:16.934 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:16.934 "strip_size_kb": 64, 00:16:16.934 "state": "online", 00:16:16.934 "raid_level": "raid5f", 00:16:16.934 "superblock": true, 00:16:16.934 "num_base_bdevs": 4, 00:16:16.934 "num_base_bdevs_discovered": 4, 00:16:16.934 "num_base_bdevs_operational": 4, 00:16:16.934 "base_bdevs_list": [ 00:16:16.934 { 00:16:16.934 "name": "BaseBdev1", 00:16:16.934 "uuid": "096cdb1e-9f4f-5aca-bfb5-46639bd1dba3", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 }, 00:16:16.934 { 00:16:16.934 "name": "BaseBdev2", 00:16:16.934 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 }, 00:16:16.934 { 00:16:16.934 "name": "BaseBdev3", 00:16:16.934 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 }, 00:16:16.934 { 00:16:16.934 "name": "BaseBdev4", 00:16:16.934 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:16.934 "is_configured": true, 00:16:16.934 "data_offset": 2048, 00:16:16.934 "data_size": 63488 00:16:16.934 } 00:16:16.934 ] 00:16:16.934 }' 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.934 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.194 10:27:30 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.194 [2024-11-19 10:27:30.938316] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.194 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:17.454 10:27:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:17.454 10:27:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:17.454 [2024-11-19 10:27:31.193708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:17.454 /dev/nbd0 00:16:17.454 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:17.715 1+0 records in 00:16:17.715 
1+0 records out 00:16:17.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336796 s, 12.2 MB/s 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:17.715 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:17.975 496+0 records in 00:16:17.975 496+0 records out 00:16:17.975 97517568 bytes (98 MB, 93 MiB) copied, 0.430701 s, 226 MB/s 00:16:17.975 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:17.975 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:17.975 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:17.975 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:17.975 10:27:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:17.975 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:17.975 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:18.235 [2024-11-19 10:27:31.908153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.235 [2024-11-19 10:27:31.929730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:18.235 10:27:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.235 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.236 "name": "raid_bdev1", 00:16:18.236 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:18.236 "strip_size_kb": 64, 00:16:18.236 "state": "online", 00:16:18.236 "raid_level": "raid5f", 00:16:18.236 "superblock": true, 00:16:18.236 "num_base_bdevs": 4, 00:16:18.236 "num_base_bdevs_discovered": 3, 00:16:18.236 "num_base_bdevs_operational": 3, 00:16:18.236 
"base_bdevs_list": [ 00:16:18.236 { 00:16:18.236 "name": null, 00:16:18.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.236 "is_configured": false, 00:16:18.236 "data_offset": 0, 00:16:18.236 "data_size": 63488 00:16:18.236 }, 00:16:18.236 { 00:16:18.236 "name": "BaseBdev2", 00:16:18.236 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:18.236 "is_configured": true, 00:16:18.236 "data_offset": 2048, 00:16:18.236 "data_size": 63488 00:16:18.236 }, 00:16:18.236 { 00:16:18.236 "name": "BaseBdev3", 00:16:18.236 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:18.236 "is_configured": true, 00:16:18.236 "data_offset": 2048, 00:16:18.236 "data_size": 63488 00:16:18.236 }, 00:16:18.236 { 00:16:18.236 "name": "BaseBdev4", 00:16:18.236 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:18.236 "is_configured": true, 00:16:18.236 "data_offset": 2048, 00:16:18.236 "data_size": 63488 00:16:18.236 } 00:16:18.236 ] 00:16:18.236 }' 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.236 10:27:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.805 10:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:18.805 10:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.805 10:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.805 [2024-11-19 10:27:32.416885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:18.805 [2024-11-19 10:27:32.432691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:18.805 10:27:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.805 10:27:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:18.805 [2024-11-19 10:27:32.441911] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.745 "name": "raid_bdev1", 00:16:19.745 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:19.745 "strip_size_kb": 64, 00:16:19.745 "state": "online", 00:16:19.745 "raid_level": "raid5f", 00:16:19.745 "superblock": true, 00:16:19.745 "num_base_bdevs": 4, 00:16:19.745 "num_base_bdevs_discovered": 4, 00:16:19.745 "num_base_bdevs_operational": 4, 00:16:19.745 "process": { 00:16:19.745 "type": "rebuild", 00:16:19.745 "target": "spare", 00:16:19.745 "progress": { 00:16:19.745 "blocks": 19200, 00:16:19.745 "percent": 10 00:16:19.745 } 00:16:19.745 }, 00:16:19.745 "base_bdevs_list": [ 00:16:19.745 { 00:16:19.745 "name": "spare", 00:16:19.745 "uuid": 
"6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:19.745 "is_configured": true, 00:16:19.745 "data_offset": 2048, 00:16:19.745 "data_size": 63488 00:16:19.745 }, 00:16:19.745 { 00:16:19.745 "name": "BaseBdev2", 00:16:19.745 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:19.745 "is_configured": true, 00:16:19.745 "data_offset": 2048, 00:16:19.745 "data_size": 63488 00:16:19.745 }, 00:16:19.745 { 00:16:19.745 "name": "BaseBdev3", 00:16:19.745 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:19.745 "is_configured": true, 00:16:19.745 "data_offset": 2048, 00:16:19.745 "data_size": 63488 00:16:19.745 }, 00:16:19.745 { 00:16:19.745 "name": "BaseBdev4", 00:16:19.745 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:19.745 "is_configured": true, 00:16:19.745 "data_offset": 2048, 00:16:19.745 "data_size": 63488 00:16:19.745 } 00:16:19.745 ] 00:16:19.745 }' 00:16:19.745 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.004 [2024-11-19 10:27:33.588586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.004 [2024-11-19 10:27:33.647477] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:20.004 [2024-11-19 10:27:33.647533] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.004 [2024-11-19 10:27:33.647548] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.004 [2024-11-19 10:27:33.647557] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.004 "name": "raid_bdev1", 00:16:20.004 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:20.004 "strip_size_kb": 64, 00:16:20.004 "state": "online", 00:16:20.004 "raid_level": "raid5f", 00:16:20.004 "superblock": true, 00:16:20.004 "num_base_bdevs": 4, 00:16:20.004 "num_base_bdevs_discovered": 3, 00:16:20.004 "num_base_bdevs_operational": 3, 00:16:20.004 "base_bdevs_list": [ 00:16:20.004 { 00:16:20.004 "name": null, 00:16:20.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.004 "is_configured": false, 00:16:20.004 "data_offset": 0, 00:16:20.004 "data_size": 63488 00:16:20.004 }, 00:16:20.004 { 00:16:20.004 "name": "BaseBdev2", 00:16:20.004 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:20.004 "is_configured": true, 00:16:20.004 "data_offset": 2048, 00:16:20.004 "data_size": 63488 00:16:20.004 }, 00:16:20.004 { 00:16:20.004 "name": "BaseBdev3", 00:16:20.004 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:20.004 "is_configured": true, 00:16:20.004 "data_offset": 2048, 00:16:20.004 "data_size": 63488 00:16:20.004 }, 00:16:20.004 { 00:16:20.004 "name": "BaseBdev4", 00:16:20.004 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:20.004 "is_configured": true, 00:16:20.004 "data_offset": 2048, 00:16:20.004 "data_size": 63488 00:16:20.004 } 00:16:20.004 ] 00:16:20.004 }' 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.004 10:27:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.572 
10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.572 "name": "raid_bdev1", 00:16:20.572 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:20.572 "strip_size_kb": 64, 00:16:20.572 "state": "online", 00:16:20.572 "raid_level": "raid5f", 00:16:20.572 "superblock": true, 00:16:20.572 "num_base_bdevs": 4, 00:16:20.572 "num_base_bdevs_discovered": 3, 00:16:20.572 "num_base_bdevs_operational": 3, 00:16:20.572 "base_bdevs_list": [ 00:16:20.572 { 00:16:20.572 "name": null, 00:16:20.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.572 "is_configured": false, 00:16:20.572 "data_offset": 0, 00:16:20.572 "data_size": 63488 00:16:20.572 }, 00:16:20.572 { 00:16:20.572 "name": "BaseBdev2", 00:16:20.572 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:20.572 "is_configured": true, 00:16:20.572 "data_offset": 2048, 00:16:20.572 "data_size": 63488 00:16:20.572 }, 00:16:20.572 { 00:16:20.572 "name": "BaseBdev3", 00:16:20.572 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:20.572 "is_configured": true, 00:16:20.572 "data_offset": 2048, 00:16:20.572 
"data_size": 63488 00:16:20.572 }, 00:16:20.572 { 00:16:20.572 "name": "BaseBdev4", 00:16:20.572 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:20.572 "is_configured": true, 00:16:20.572 "data_offset": 2048, 00:16:20.572 "data_size": 63488 00:16:20.572 } 00:16:20.572 ] 00:16:20.572 }' 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.572 [2024-11-19 10:27:34.219442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.572 [2024-11-19 10:27:34.233795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.572 10:27:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:20.572 [2024-11-19 10:27:34.242243] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.514 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.514 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.514 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.515 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.515 "name": "raid_bdev1", 00:16:21.515 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:21.515 "strip_size_kb": 64, 00:16:21.515 "state": "online", 00:16:21.515 "raid_level": "raid5f", 00:16:21.515 "superblock": true, 00:16:21.515 "num_base_bdevs": 4, 00:16:21.515 "num_base_bdevs_discovered": 4, 00:16:21.515 "num_base_bdevs_operational": 4, 00:16:21.515 "process": { 00:16:21.515 "type": "rebuild", 00:16:21.515 "target": "spare", 00:16:21.515 "progress": { 00:16:21.515 "blocks": 19200, 00:16:21.515 "percent": 10 00:16:21.515 } 00:16:21.515 }, 00:16:21.515 "base_bdevs_list": [ 00:16:21.515 { 00:16:21.515 "name": "spare", 00:16:21.515 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:21.515 "is_configured": true, 00:16:21.515 "data_offset": 2048, 00:16:21.515 "data_size": 63488 00:16:21.515 }, 00:16:21.515 { 00:16:21.515 "name": "BaseBdev2", 00:16:21.515 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:21.515 "is_configured": true, 00:16:21.515 "data_offset": 2048, 00:16:21.515 "data_size": 63488 00:16:21.515 }, 00:16:21.515 { 
00:16:21.515 "name": "BaseBdev3", 00:16:21.515 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:21.515 "is_configured": true, 00:16:21.515 "data_offset": 2048, 00:16:21.515 "data_size": 63488 00:16:21.515 }, 00:16:21.515 { 00:16:21.515 "name": "BaseBdev4", 00:16:21.515 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:21.515 "is_configured": true, 00:16:21.515 "data_offset": 2048, 00:16:21.515 "data_size": 63488 00:16:21.515 } 00:16:21.515 ] 00:16:21.515 }' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:21.775 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=620 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.775 "name": "raid_bdev1", 00:16:21.775 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:21.775 "strip_size_kb": 64, 00:16:21.775 "state": "online", 00:16:21.775 "raid_level": "raid5f", 00:16:21.775 "superblock": true, 00:16:21.775 "num_base_bdevs": 4, 00:16:21.775 "num_base_bdevs_discovered": 4, 00:16:21.775 "num_base_bdevs_operational": 4, 00:16:21.775 "process": { 00:16:21.775 "type": "rebuild", 00:16:21.775 "target": "spare", 00:16:21.775 "progress": { 00:16:21.775 "blocks": 21120, 00:16:21.775 "percent": 11 00:16:21.775 } 00:16:21.775 }, 00:16:21.775 "base_bdevs_list": [ 00:16:21.775 { 00:16:21.775 "name": "spare", 00:16:21.775 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:21.775 "is_configured": true, 00:16:21.775 "data_offset": 2048, 00:16:21.775 "data_size": 63488 00:16:21.775 }, 00:16:21.775 { 00:16:21.775 "name": "BaseBdev2", 00:16:21.775 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:21.775 "is_configured": true, 00:16:21.775 "data_offset": 2048, 00:16:21.775 "data_size": 63488 00:16:21.775 }, 00:16:21.775 { 
00:16:21.775 "name": "BaseBdev3", 00:16:21.775 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:21.775 "is_configured": true, 00:16:21.775 "data_offset": 2048, 00:16:21.775 "data_size": 63488 00:16:21.775 }, 00:16:21.775 { 00:16:21.775 "name": "BaseBdev4", 00:16:21.775 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:21.775 "is_configured": true, 00:16:21.775 "data_offset": 2048, 00:16:21.775 "data_size": 63488 00:16:21.775 } 00:16:21.775 ] 00:16:21.775 }' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:21.775 10:27:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.157 10:27:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.157 "name": "raid_bdev1", 00:16:23.157 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:23.157 "strip_size_kb": 64, 00:16:23.157 "state": "online", 00:16:23.157 "raid_level": "raid5f", 00:16:23.157 "superblock": true, 00:16:23.157 "num_base_bdevs": 4, 00:16:23.157 "num_base_bdevs_discovered": 4, 00:16:23.157 "num_base_bdevs_operational": 4, 00:16:23.157 "process": { 00:16:23.157 "type": "rebuild", 00:16:23.157 "target": "spare", 00:16:23.157 "progress": { 00:16:23.157 "blocks": 42240, 00:16:23.157 "percent": 22 00:16:23.157 } 00:16:23.157 }, 00:16:23.157 "base_bdevs_list": [ 00:16:23.157 { 00:16:23.157 "name": "spare", 00:16:23.157 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:23.157 "is_configured": true, 00:16:23.157 "data_offset": 2048, 00:16:23.157 "data_size": 63488 00:16:23.157 }, 00:16:23.157 { 00:16:23.157 "name": "BaseBdev2", 00:16:23.157 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:23.157 "is_configured": true, 00:16:23.157 "data_offset": 2048, 00:16:23.157 "data_size": 63488 00:16:23.157 }, 00:16:23.157 { 00:16:23.157 "name": "BaseBdev3", 00:16:23.157 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:23.157 "is_configured": true, 00:16:23.157 "data_offset": 2048, 00:16:23.157 "data_size": 63488 00:16:23.157 }, 00:16:23.157 { 00:16:23.157 "name": "BaseBdev4", 00:16:23.157 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:23.157 "is_configured": true, 00:16:23.157 "data_offset": 2048, 00:16:23.157 "data_size": 63488 00:16:23.157 } 00:16:23.157 ] 00:16:23.157 }' 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.157 10:27:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.096 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.097 "name": "raid_bdev1", 00:16:24.097 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:24.097 "strip_size_kb": 64, 00:16:24.097 "state": 
"online", 00:16:24.097 "raid_level": "raid5f", 00:16:24.097 "superblock": true, 00:16:24.097 "num_base_bdevs": 4, 00:16:24.097 "num_base_bdevs_discovered": 4, 00:16:24.097 "num_base_bdevs_operational": 4, 00:16:24.097 "process": { 00:16:24.097 "type": "rebuild", 00:16:24.097 "target": "spare", 00:16:24.097 "progress": { 00:16:24.097 "blocks": 65280, 00:16:24.097 "percent": 34 00:16:24.097 } 00:16:24.097 }, 00:16:24.097 "base_bdevs_list": [ 00:16:24.097 { 00:16:24.097 "name": "spare", 00:16:24.097 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:24.097 "is_configured": true, 00:16:24.097 "data_offset": 2048, 00:16:24.097 "data_size": 63488 00:16:24.097 }, 00:16:24.097 { 00:16:24.097 "name": "BaseBdev2", 00:16:24.097 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:24.097 "is_configured": true, 00:16:24.097 "data_offset": 2048, 00:16:24.097 "data_size": 63488 00:16:24.097 }, 00:16:24.097 { 00:16:24.097 "name": "BaseBdev3", 00:16:24.097 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:24.097 "is_configured": true, 00:16:24.097 "data_offset": 2048, 00:16:24.097 "data_size": 63488 00:16:24.097 }, 00:16:24.097 { 00:16:24.097 "name": "BaseBdev4", 00:16:24.097 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:24.097 "is_configured": true, 00:16:24.097 "data_offset": 2048, 00:16:24.097 "data_size": 63488 00:16:24.097 } 00:16:24.097 ] 00:16:24.097 }' 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.097 10:27:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.036 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.036 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.036 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.036 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.036 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.036 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.296 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.296 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.296 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.296 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.296 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.296 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.296 "name": "raid_bdev1", 00:16:25.296 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:25.296 "strip_size_kb": 64, 00:16:25.296 "state": "online", 00:16:25.296 "raid_level": "raid5f", 00:16:25.296 "superblock": true, 00:16:25.296 "num_base_bdevs": 4, 00:16:25.296 "num_base_bdevs_discovered": 4, 00:16:25.296 "num_base_bdevs_operational": 4, 00:16:25.296 "process": { 00:16:25.296 "type": "rebuild", 00:16:25.296 "target": "spare", 00:16:25.296 "progress": { 00:16:25.296 "blocks": 86400, 00:16:25.296 "percent": 45 00:16:25.296 } 00:16:25.296 }, 00:16:25.296 "base_bdevs_list": [ 00:16:25.296 { 00:16:25.296 "name": "spare", 00:16:25.296 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 
00:16:25.296 "is_configured": true, 00:16:25.296 "data_offset": 2048, 00:16:25.296 "data_size": 63488 00:16:25.296 }, 00:16:25.296 { 00:16:25.296 "name": "BaseBdev2", 00:16:25.296 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:25.296 "is_configured": true, 00:16:25.296 "data_offset": 2048, 00:16:25.296 "data_size": 63488 00:16:25.296 }, 00:16:25.296 { 00:16:25.296 "name": "BaseBdev3", 00:16:25.297 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:25.297 "is_configured": true, 00:16:25.297 "data_offset": 2048, 00:16:25.297 "data_size": 63488 00:16:25.297 }, 00:16:25.297 { 00:16:25.297 "name": "BaseBdev4", 00:16:25.297 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:25.297 "is_configured": true, 00:16:25.297 "data_offset": 2048, 00:16:25.297 "data_size": 63488 00:16:25.297 } 00:16:25.297 ] 00:16:25.297 }' 00:16:25.297 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.297 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.297 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.297 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.297 10:27:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.237 10:27:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.237 10:27:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.497 10:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.497 "name": "raid_bdev1", 00:16:26.497 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:26.497 "strip_size_kb": 64, 00:16:26.497 "state": "online", 00:16:26.497 "raid_level": "raid5f", 00:16:26.497 "superblock": true, 00:16:26.497 "num_base_bdevs": 4, 00:16:26.497 "num_base_bdevs_discovered": 4, 00:16:26.497 "num_base_bdevs_operational": 4, 00:16:26.497 "process": { 00:16:26.497 "type": "rebuild", 00:16:26.497 "target": "spare", 00:16:26.497 "progress": { 00:16:26.497 "blocks": 109440, 00:16:26.497 "percent": 57 00:16:26.497 } 00:16:26.497 }, 00:16:26.497 "base_bdevs_list": [ 00:16:26.497 { 00:16:26.497 "name": "spare", 00:16:26.497 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:26.497 "is_configured": true, 00:16:26.497 "data_offset": 2048, 00:16:26.497 "data_size": 63488 00:16:26.497 }, 00:16:26.497 { 00:16:26.497 "name": "BaseBdev2", 00:16:26.497 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:26.497 "is_configured": true, 00:16:26.497 "data_offset": 2048, 00:16:26.497 "data_size": 63488 00:16:26.497 }, 00:16:26.497 { 00:16:26.497 "name": "BaseBdev3", 00:16:26.497 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:26.497 "is_configured": true, 00:16:26.497 "data_offset": 2048, 00:16:26.497 
"data_size": 63488 00:16:26.497 }, 00:16:26.497 { 00:16:26.497 "name": "BaseBdev4", 00:16:26.497 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:26.497 "is_configured": true, 00:16:26.497 "data_offset": 2048, 00:16:26.497 "data_size": 63488 00:16:26.497 } 00:16:26.497 ] 00:16:26.497 }' 00:16:26.497 10:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.497 10:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:26.497 10:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.497 10:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:26.497 10:27:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.436 
10:27:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.436 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.436 "name": "raid_bdev1", 00:16:27.436 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:27.436 "strip_size_kb": 64, 00:16:27.436 "state": "online", 00:16:27.436 "raid_level": "raid5f", 00:16:27.436 "superblock": true, 00:16:27.436 "num_base_bdevs": 4, 00:16:27.436 "num_base_bdevs_discovered": 4, 00:16:27.436 "num_base_bdevs_operational": 4, 00:16:27.436 "process": { 00:16:27.436 "type": "rebuild", 00:16:27.436 "target": "spare", 00:16:27.436 "progress": { 00:16:27.436 "blocks": 130560, 00:16:27.436 "percent": 68 00:16:27.436 } 00:16:27.436 }, 00:16:27.436 "base_bdevs_list": [ 00:16:27.436 { 00:16:27.436 "name": "spare", 00:16:27.436 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:27.436 "is_configured": true, 00:16:27.436 "data_offset": 2048, 00:16:27.436 "data_size": 63488 00:16:27.436 }, 00:16:27.436 { 00:16:27.436 "name": "BaseBdev2", 00:16:27.436 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:27.436 "is_configured": true, 00:16:27.436 "data_offset": 2048, 00:16:27.436 "data_size": 63488 00:16:27.436 }, 00:16:27.437 { 00:16:27.437 "name": "BaseBdev3", 00:16:27.437 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:27.437 "is_configured": true, 00:16:27.437 "data_offset": 2048, 00:16:27.437 "data_size": 63488 00:16:27.437 }, 00:16:27.437 { 00:16:27.437 "name": "BaseBdev4", 00:16:27.437 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:27.437 "is_configured": true, 00:16:27.437 "data_offset": 2048, 00:16:27.437 "data_size": 63488 00:16:27.437 } 00:16:27.437 ] 00:16:27.437 }' 00:16:27.437 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.437 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:27.437 10:27:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.697 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:27.697 10:27:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:28.637 "name": "raid_bdev1", 00:16:28.637 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:28.637 "strip_size_kb": 64, 00:16:28.637 "state": "online", 00:16:28.637 "raid_level": "raid5f", 00:16:28.637 "superblock": true, 00:16:28.637 "num_base_bdevs": 4, 00:16:28.637 "num_base_bdevs_discovered": 4, 00:16:28.637 "num_base_bdevs_operational": 
4, 00:16:28.637 "process": { 00:16:28.637 "type": "rebuild", 00:16:28.637 "target": "spare", 00:16:28.637 "progress": { 00:16:28.637 "blocks": 153600, 00:16:28.637 "percent": 80 00:16:28.637 } 00:16:28.637 }, 00:16:28.637 "base_bdevs_list": [ 00:16:28.637 { 00:16:28.637 "name": "spare", 00:16:28.637 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:28.637 "is_configured": true, 00:16:28.637 "data_offset": 2048, 00:16:28.637 "data_size": 63488 00:16:28.637 }, 00:16:28.637 { 00:16:28.637 "name": "BaseBdev2", 00:16:28.637 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:28.637 "is_configured": true, 00:16:28.637 "data_offset": 2048, 00:16:28.637 "data_size": 63488 00:16:28.637 }, 00:16:28.637 { 00:16:28.637 "name": "BaseBdev3", 00:16:28.637 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:28.637 "is_configured": true, 00:16:28.637 "data_offset": 2048, 00:16:28.637 "data_size": 63488 00:16:28.637 }, 00:16:28.637 { 00:16:28.637 "name": "BaseBdev4", 00:16:28.637 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:28.637 "is_configured": true, 00:16:28.637 "data_offset": 2048, 00:16:28.637 "data_size": 63488 00:16:28.637 } 00:16:28.637 ] 00:16:28.637 }' 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:28.637 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:28.897 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:28.897 10:27:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.837 
10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.837 "name": "raid_bdev1", 00:16:29.837 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:29.837 "strip_size_kb": 64, 00:16:29.837 "state": "online", 00:16:29.837 "raid_level": "raid5f", 00:16:29.837 "superblock": true, 00:16:29.837 "num_base_bdevs": 4, 00:16:29.837 "num_base_bdevs_discovered": 4, 00:16:29.837 "num_base_bdevs_operational": 4, 00:16:29.837 "process": { 00:16:29.837 "type": "rebuild", 00:16:29.837 "target": "spare", 00:16:29.837 "progress": { 00:16:29.837 "blocks": 174720, 00:16:29.837 "percent": 91 00:16:29.837 } 00:16:29.837 }, 00:16:29.837 "base_bdevs_list": [ 00:16:29.837 { 00:16:29.837 "name": "spare", 00:16:29.837 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:29.837 "is_configured": true, 00:16:29.837 "data_offset": 2048, 00:16:29.837 "data_size": 63488 00:16:29.837 }, 00:16:29.837 { 00:16:29.837 "name": "BaseBdev2", 00:16:29.837 "uuid": 
"d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:29.837 "is_configured": true, 00:16:29.837 "data_offset": 2048, 00:16:29.837 "data_size": 63488 00:16:29.837 }, 00:16:29.837 { 00:16:29.837 "name": "BaseBdev3", 00:16:29.837 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:29.837 "is_configured": true, 00:16:29.837 "data_offset": 2048, 00:16:29.837 "data_size": 63488 00:16:29.837 }, 00:16:29.837 { 00:16:29.837 "name": "BaseBdev4", 00:16:29.837 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:29.837 "is_configured": true, 00:16:29.837 "data_offset": 2048, 00:16:29.837 "data_size": 63488 00:16:29.837 } 00:16:29.837 ] 00:16:29.837 }' 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.837 10:27:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:30.778 [2024-11-19 10:27:44.282483] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:30.778 [2024-11-19 10:27:44.282619] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:30.778 [2024-11-19 10:27:44.282803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.039 "name": "raid_bdev1", 00:16:31.039 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:31.039 "strip_size_kb": 64, 00:16:31.039 "state": "online", 00:16:31.039 "raid_level": "raid5f", 00:16:31.039 "superblock": true, 00:16:31.039 "num_base_bdevs": 4, 00:16:31.039 "num_base_bdevs_discovered": 4, 00:16:31.039 "num_base_bdevs_operational": 4, 00:16:31.039 "base_bdevs_list": [ 00:16:31.039 { 00:16:31.039 "name": "spare", 00:16:31.039 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 }, 00:16:31.039 { 00:16:31.039 "name": "BaseBdev2", 00:16:31.039 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 }, 00:16:31.039 { 00:16:31.039 "name": "BaseBdev3", 00:16:31.039 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 }, 
00:16:31.039 { 00:16:31.039 "name": "BaseBdev4", 00:16:31.039 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 } 00:16:31.039 ] 00:16:31.039 }' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.039 "name": "raid_bdev1", 00:16:31.039 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:31.039 "strip_size_kb": 64, 00:16:31.039 "state": "online", 00:16:31.039 "raid_level": "raid5f", 00:16:31.039 "superblock": true, 00:16:31.039 "num_base_bdevs": 4, 00:16:31.039 "num_base_bdevs_discovered": 4, 00:16:31.039 "num_base_bdevs_operational": 4, 00:16:31.039 "base_bdevs_list": [ 00:16:31.039 { 00:16:31.039 "name": "spare", 00:16:31.039 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 }, 00:16:31.039 { 00:16:31.039 "name": "BaseBdev2", 00:16:31.039 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 }, 00:16:31.039 { 00:16:31.039 "name": "BaseBdev3", 00:16:31.039 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 }, 00:16:31.039 { 00:16:31.039 "name": "BaseBdev4", 00:16:31.039 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:31.039 "is_configured": true, 00:16:31.039 "data_offset": 2048, 00:16:31.039 "data_size": 63488 00:16:31.039 } 00:16:31.039 ] 00:16:31.039 }' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:31.039 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:31.320 10:27:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.320 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.320 "name": "raid_bdev1", 00:16:31.320 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:31.320 "strip_size_kb": 64, 00:16:31.320 "state": "online", 00:16:31.320 "raid_level": "raid5f", 00:16:31.320 "superblock": true, 00:16:31.320 "num_base_bdevs": 4, 00:16:31.320 "num_base_bdevs_discovered": 4, 00:16:31.320 "num_base_bdevs_operational": 4, 00:16:31.320 
"base_bdevs_list": [ 00:16:31.320 { 00:16:31.320 "name": "spare", 00:16:31.320 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev2", 00:16:31.320 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.320 "name": "BaseBdev3", 00:16:31.320 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:31.320 "is_configured": true, 00:16:31.320 "data_offset": 2048, 00:16:31.320 "data_size": 63488 00:16:31.320 }, 00:16:31.320 { 00:16:31.321 "name": "BaseBdev4", 00:16:31.321 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:31.321 "is_configured": true, 00:16:31.321 "data_offset": 2048, 00:16:31.321 "data_size": 63488 00:16:31.321 } 00:16:31.321 ] 00:16:31.321 }' 00:16:31.321 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.321 10:27:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.581 [2024-11-19 10:27:45.307054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.581 [2024-11-19 10:27:45.307081] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:31.581 [2024-11-19 10:27:45.307156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:31.581 [2024-11-19 10:27:45.307244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:31.581 [2024-11-19 10:27:45.307264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:31.581 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:31.842 /dev/nbd0 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:31.842 1+0 records in 00:16:31.842 1+0 records out 00:16:31.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344326 s, 11.9 MB/s 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:31.842 10:27:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:31.842 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:32.103 /dev/nbd1 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:16:32.103 1+0 records in 00:16:32.103 1+0 records out 00:16:32.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422097 s, 9.7 MB/s 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:32.103 10:27:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.363 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:32.623 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.883 [2024-11-19 10:27:46.509481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.883 [2024-11-19 10:27:46.509545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.883 [2024-11-19 10:27:46.509571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:32.883 [2024-11-19 10:27:46.509581] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.883 [2024-11-19 10:27:46.511798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.883 [2024-11-19 10:27:46.511842] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.883 [2024-11-19 10:27:46.511934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:32.883 [2024-11-19 10:27:46.511990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:32.883 [2024-11-19 10:27:46.512143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.883 [2024-11-19 10:27:46.512240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.883 [2024-11-19 10:27:46.512311] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.883 spare 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.883 [2024-11-19 10:27:46.612216] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:32.883 [2024-11-19 10:27:46.612249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:32.883 [2024-11-19 10:27:46.612511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:32.883 [2024-11-19 10:27:46.619013] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:32.883 [2024-11-19 10:27:46.619036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:32.883 [2024-11-19 10:27:46.619218] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:32.883 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.144 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.144 "name": "raid_bdev1", 00:16:33.144 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:33.144 "strip_size_kb": 64, 00:16:33.144 "state": "online", 00:16:33.144 "raid_level": "raid5f", 00:16:33.144 "superblock": true, 00:16:33.144 "num_base_bdevs": 4, 00:16:33.144 "num_base_bdevs_discovered": 4, 00:16:33.144 "num_base_bdevs_operational": 4, 00:16:33.144 "base_bdevs_list": [ 00:16:33.144 { 00:16:33.144 "name": "spare", 00:16:33.144 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:33.144 "is_configured": true, 00:16:33.144 "data_offset": 2048, 00:16:33.144 "data_size": 63488 00:16:33.144 }, 00:16:33.144 { 00:16:33.144 "name": "BaseBdev2", 00:16:33.144 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:33.144 "is_configured": true, 00:16:33.144 "data_offset": 
2048, 00:16:33.144 "data_size": 63488 00:16:33.144 }, 00:16:33.144 { 00:16:33.144 "name": "BaseBdev3", 00:16:33.144 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:33.144 "is_configured": true, 00:16:33.144 "data_offset": 2048, 00:16:33.144 "data_size": 63488 00:16:33.144 }, 00:16:33.144 { 00:16:33.144 "name": "BaseBdev4", 00:16:33.144 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:33.144 "is_configured": true, 00:16:33.144 "data_offset": 2048, 00:16:33.144 "data_size": 63488 00:16:33.144 } 00:16:33.144 ] 00:16:33.144 }' 00:16:33.144 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.144 10:27:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.405 "name": 
"raid_bdev1", 00:16:33.405 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:33.405 "strip_size_kb": 64, 00:16:33.405 "state": "online", 00:16:33.405 "raid_level": "raid5f", 00:16:33.405 "superblock": true, 00:16:33.405 "num_base_bdevs": 4, 00:16:33.405 "num_base_bdevs_discovered": 4, 00:16:33.405 "num_base_bdevs_operational": 4, 00:16:33.405 "base_bdevs_list": [ 00:16:33.405 { 00:16:33.405 "name": "spare", 00:16:33.405 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:33.405 "is_configured": true, 00:16:33.405 "data_offset": 2048, 00:16:33.405 "data_size": 63488 00:16:33.405 }, 00:16:33.405 { 00:16:33.405 "name": "BaseBdev2", 00:16:33.405 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:33.405 "is_configured": true, 00:16:33.405 "data_offset": 2048, 00:16:33.405 "data_size": 63488 00:16:33.405 }, 00:16:33.405 { 00:16:33.405 "name": "BaseBdev3", 00:16:33.405 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:33.405 "is_configured": true, 00:16:33.405 "data_offset": 2048, 00:16:33.405 "data_size": 63488 00:16:33.405 }, 00:16:33.405 { 00:16:33.405 "name": "BaseBdev4", 00:16:33.405 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:33.405 "is_configured": true, 00:16:33.405 "data_offset": 2048, 00:16:33.405 "data_size": 63488 00:16:33.405 } 00:16:33.405 ] 00:16:33.405 }' 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:33.405 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.666 [2024-11-19 10:27:47.270342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.666 "name": "raid_bdev1", 00:16:33.666 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:33.666 "strip_size_kb": 64, 00:16:33.666 "state": "online", 00:16:33.666 "raid_level": "raid5f", 00:16:33.666 "superblock": true, 00:16:33.666 "num_base_bdevs": 4, 00:16:33.666 "num_base_bdevs_discovered": 3, 00:16:33.666 "num_base_bdevs_operational": 3, 00:16:33.666 "base_bdevs_list": [ 00:16:33.666 { 00:16:33.666 "name": null, 00:16:33.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.666 "is_configured": false, 00:16:33.666 "data_offset": 0, 00:16:33.666 "data_size": 63488 00:16:33.666 }, 00:16:33.666 { 00:16:33.666 "name": "BaseBdev2", 00:16:33.666 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:33.666 "is_configured": true, 00:16:33.666 "data_offset": 2048, 00:16:33.666 "data_size": 63488 00:16:33.666 }, 00:16:33.666 { 00:16:33.666 "name": "BaseBdev3", 00:16:33.666 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:33.666 "is_configured": true, 00:16:33.666 "data_offset": 2048, 00:16:33.666 "data_size": 63488 00:16:33.666 }, 00:16:33.666 { 00:16:33.666 "name": "BaseBdev4", 00:16:33.666 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:33.666 "is_configured": true, 00:16:33.666 "data_offset": 
2048, 00:16:33.666 "data_size": 63488 00:16:33.666 } 00:16:33.666 ] 00:16:33.666 }' 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.666 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.926 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:33.926 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.926 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.926 [2024-11-19 10:27:47.681649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.926 [2024-11-19 10:27:47.681826] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:33.926 [2024-11-19 10:27:47.681852] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:33.926 [2024-11-19 10:27:47.681885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:33.926 [2024-11-19 10:27:47.696146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:33.926 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.926 10:27:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:33.926 [2024-11-19 10:27:47.704780] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.309 "name": "raid_bdev1", 00:16:35.309 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:35.309 "strip_size_kb": 64, 00:16:35.309 "state": "online", 00:16:35.309 
"raid_level": "raid5f", 00:16:35.309 "superblock": true, 00:16:35.309 "num_base_bdevs": 4, 00:16:35.309 "num_base_bdevs_discovered": 4, 00:16:35.309 "num_base_bdevs_operational": 4, 00:16:35.309 "process": { 00:16:35.309 "type": "rebuild", 00:16:35.309 "target": "spare", 00:16:35.309 "progress": { 00:16:35.309 "blocks": 19200, 00:16:35.309 "percent": 10 00:16:35.309 } 00:16:35.309 }, 00:16:35.309 "base_bdevs_list": [ 00:16:35.309 { 00:16:35.309 "name": "spare", 00:16:35.309 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 2048, 00:16:35.309 "data_size": 63488 00:16:35.309 }, 00:16:35.309 { 00:16:35.309 "name": "BaseBdev2", 00:16:35.309 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 2048, 00:16:35.309 "data_size": 63488 00:16:35.309 }, 00:16:35.309 { 00:16:35.309 "name": "BaseBdev3", 00:16:35.309 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 2048, 00:16:35.309 "data_size": 63488 00:16:35.309 }, 00:16:35.309 { 00:16:35.309 "name": "BaseBdev4", 00:16:35.309 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 2048, 00:16:35.309 "data_size": 63488 00:16:35.309 } 00:16:35.309 ] 00:16:35.309 }' 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.309 [2024-11-19 10:27:48.835482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.309 [2024-11-19 10:27:48.910204] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:35.309 [2024-11-19 10:27:48.910269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.309 [2024-11-19 10:27:48.910301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.309 [2024-11-19 10:27:48.910309] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.309 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.309 "name": "raid_bdev1", 00:16:35.309 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:35.309 "strip_size_kb": 64, 00:16:35.309 "state": "online", 00:16:35.309 "raid_level": "raid5f", 00:16:35.309 "superblock": true, 00:16:35.309 "num_base_bdevs": 4, 00:16:35.309 "num_base_bdevs_discovered": 3, 00:16:35.309 "num_base_bdevs_operational": 3, 00:16:35.309 "base_bdevs_list": [ 00:16:35.309 { 00:16:35.309 "name": null, 00:16:35.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.309 "is_configured": false, 00:16:35.309 "data_offset": 0, 00:16:35.309 "data_size": 63488 00:16:35.309 }, 00:16:35.309 { 00:16:35.309 "name": "BaseBdev2", 00:16:35.309 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:35.309 "is_configured": true, 00:16:35.309 "data_offset": 2048, 00:16:35.309 "data_size": 63488 00:16:35.310 }, 00:16:35.310 { 00:16:35.310 "name": "BaseBdev3", 00:16:35.310 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:35.310 "is_configured": true, 00:16:35.310 "data_offset": 2048, 00:16:35.310 "data_size": 63488 00:16:35.310 }, 00:16:35.310 { 00:16:35.310 "name": "BaseBdev4", 00:16:35.310 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:35.310 "is_configured": true, 00:16:35.310 "data_offset": 2048, 00:16:35.310 "data_size": 63488 00:16:35.310 } 00:16:35.310 ] 00:16:35.310 
}' 00:16:35.310 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.310 10:27:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.878 10:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:35.878 10:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.878 10:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.878 [2024-11-19 10:27:49.442106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:35.878 [2024-11-19 10:27:49.442169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.878 [2024-11-19 10:27:49.442196] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:35.878 [2024-11-19 10:27:49.442208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.878 [2024-11-19 10:27:49.442688] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.878 [2024-11-19 10:27:49.442718] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:35.878 [2024-11-19 10:27:49.442803] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:35.878 [2024-11-19 10:27:49.442830] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:35.878 [2024-11-19 10:27:49.442840] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:35.878 [2024-11-19 10:27:49.442866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.878 [2024-11-19 10:27:49.456861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:35.878 spare 00:16:35.878 10:27:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.878 10:27:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:35.878 [2024-11-19 10:27:49.465972] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.817 "name": "raid_bdev1", 00:16:36.817 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:36.817 "strip_size_kb": 64, 00:16:36.817 "state": 
"online", 00:16:36.817 "raid_level": "raid5f", 00:16:36.817 "superblock": true, 00:16:36.817 "num_base_bdevs": 4, 00:16:36.817 "num_base_bdevs_discovered": 4, 00:16:36.817 "num_base_bdevs_operational": 4, 00:16:36.817 "process": { 00:16:36.817 "type": "rebuild", 00:16:36.817 "target": "spare", 00:16:36.817 "progress": { 00:16:36.817 "blocks": 19200, 00:16:36.817 "percent": 10 00:16:36.817 } 00:16:36.817 }, 00:16:36.817 "base_bdevs_list": [ 00:16:36.817 { 00:16:36.817 "name": "spare", 00:16:36.817 "uuid": "6f506b8f-59c3-528b-9056-3968672a27fa", 00:16:36.817 "is_configured": true, 00:16:36.817 "data_offset": 2048, 00:16:36.817 "data_size": 63488 00:16:36.817 }, 00:16:36.817 { 00:16:36.817 "name": "BaseBdev2", 00:16:36.817 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:36.817 "is_configured": true, 00:16:36.817 "data_offset": 2048, 00:16:36.817 "data_size": 63488 00:16:36.817 }, 00:16:36.817 { 00:16:36.817 "name": "BaseBdev3", 00:16:36.817 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:36.817 "is_configured": true, 00:16:36.817 "data_offset": 2048, 00:16:36.817 "data_size": 63488 00:16:36.817 }, 00:16:36.817 { 00:16:36.817 "name": "BaseBdev4", 00:16:36.817 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:36.817 "is_configured": true, 00:16:36.817 "data_offset": 2048, 00:16:36.817 "data_size": 63488 00:16:36.817 } 00:16:36.817 ] 00:16:36.817 }' 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.817 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:37.077 10:27:50 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 [2024-11-19 10:27:50.620856] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.077 [2024-11-19 10:27:50.671679] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:37.077 [2024-11-19 10:27:50.671747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.077 [2024-11-19 10:27:50.671765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:37.077 [2024-11-19 10:27:50.671772] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.077 10:27:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.077 "name": "raid_bdev1", 00:16:37.077 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:37.077 "strip_size_kb": 64, 00:16:37.077 "state": "online", 00:16:37.077 "raid_level": "raid5f", 00:16:37.077 "superblock": true, 00:16:37.077 "num_base_bdevs": 4, 00:16:37.077 "num_base_bdevs_discovered": 3, 00:16:37.077 "num_base_bdevs_operational": 3, 00:16:37.077 "base_bdevs_list": [ 00:16:37.077 { 00:16:37.077 "name": null, 00:16:37.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.077 "is_configured": false, 00:16:37.077 "data_offset": 0, 00:16:37.077 "data_size": 63488 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "name": "BaseBdev2", 00:16:37.077 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:37.077 "is_configured": true, 00:16:37.077 "data_offset": 2048, 00:16:37.077 "data_size": 63488 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "name": "BaseBdev3", 00:16:37.077 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:37.077 "is_configured": true, 00:16:37.077 "data_offset": 2048, 00:16:37.077 "data_size": 63488 00:16:37.077 }, 00:16:37.077 { 00:16:37.077 "name": "BaseBdev4", 00:16:37.077 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:37.077 "is_configured": true, 00:16:37.077 "data_offset": 2048, 00:16:37.077 
"data_size": 63488 00:16:37.077 } 00:16:37.077 ] 00:16:37.077 }' 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.077 10:27:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.649 "name": "raid_bdev1", 00:16:37.649 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:37.649 "strip_size_kb": 64, 00:16:37.649 "state": "online", 00:16:37.649 "raid_level": "raid5f", 00:16:37.649 "superblock": true, 00:16:37.649 "num_base_bdevs": 4, 00:16:37.649 "num_base_bdevs_discovered": 3, 00:16:37.649 "num_base_bdevs_operational": 3, 00:16:37.649 "base_bdevs_list": [ 00:16:37.649 { 00:16:37.649 "name": null, 00:16:37.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.649 
"is_configured": false, 00:16:37.649 "data_offset": 0, 00:16:37.649 "data_size": 63488 00:16:37.649 }, 00:16:37.649 { 00:16:37.649 "name": "BaseBdev2", 00:16:37.649 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:37.649 "is_configured": true, 00:16:37.649 "data_offset": 2048, 00:16:37.649 "data_size": 63488 00:16:37.649 }, 00:16:37.649 { 00:16:37.649 "name": "BaseBdev3", 00:16:37.649 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:37.649 "is_configured": true, 00:16:37.649 "data_offset": 2048, 00:16:37.649 "data_size": 63488 00:16:37.649 }, 00:16:37.649 { 00:16:37.649 "name": "BaseBdev4", 00:16:37.649 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:37.649 "is_configured": true, 00:16:37.649 "data_offset": 2048, 00:16:37.649 "data_size": 63488 00:16:37.649 } 00:16:37.649 ] 00:16:37.649 }' 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.649 10:27:51 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.649 [2024-11-19 10:27:51.267519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:37.649 [2024-11-19 10:27:51.267580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.649 [2024-11-19 10:27:51.267601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:37.649 [2024-11-19 10:27:51.267610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.649 [2024-11-19 10:27:51.268108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.649 [2024-11-19 10:27:51.268137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:37.649 [2024-11-19 10:27:51.268227] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:37.649 [2024-11-19 10:27:51.268246] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:37.649 [2024-11-19 10:27:51.268260] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:37.649 [2024-11-19 10:27:51.268271] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:37.649 BaseBdev1 00:16:37.649 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.650 10:27:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.589 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.589 "name": "raid_bdev1", 00:16:38.590 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:38.590 "strip_size_kb": 64, 00:16:38.590 "state": "online", 00:16:38.590 "raid_level": "raid5f", 00:16:38.590 "superblock": true, 00:16:38.590 "num_base_bdevs": 4, 00:16:38.590 "num_base_bdevs_discovered": 3, 00:16:38.590 "num_base_bdevs_operational": 3, 00:16:38.590 "base_bdevs_list": [ 00:16:38.590 { 00:16:38.590 "name": null, 00:16:38.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.590 "is_configured": false, 00:16:38.590 
"data_offset": 0, 00:16:38.590 "data_size": 63488 00:16:38.590 }, 00:16:38.590 { 00:16:38.590 "name": "BaseBdev2", 00:16:38.590 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:38.590 "is_configured": true, 00:16:38.590 "data_offset": 2048, 00:16:38.590 "data_size": 63488 00:16:38.590 }, 00:16:38.590 { 00:16:38.590 "name": "BaseBdev3", 00:16:38.590 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:38.590 "is_configured": true, 00:16:38.590 "data_offset": 2048, 00:16:38.590 "data_size": 63488 00:16:38.590 }, 00:16:38.590 { 00:16:38.590 "name": "BaseBdev4", 00:16:38.590 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:38.590 "is_configured": true, 00:16:38.590 "data_offset": 2048, 00:16:38.590 "data_size": 63488 00:16:38.590 } 00:16:38.590 ] 00:16:38.590 }' 00:16:38.590 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.590 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.159 "name": "raid_bdev1", 00:16:39.159 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:39.159 "strip_size_kb": 64, 00:16:39.159 "state": "online", 00:16:39.159 "raid_level": "raid5f", 00:16:39.159 "superblock": true, 00:16:39.159 "num_base_bdevs": 4, 00:16:39.159 "num_base_bdevs_discovered": 3, 00:16:39.159 "num_base_bdevs_operational": 3, 00:16:39.159 "base_bdevs_list": [ 00:16:39.159 { 00:16:39.159 "name": null, 00:16:39.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.159 "is_configured": false, 00:16:39.159 "data_offset": 0, 00:16:39.159 "data_size": 63488 00:16:39.159 }, 00:16:39.159 { 00:16:39.159 "name": "BaseBdev2", 00:16:39.159 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:39.159 "is_configured": true, 00:16:39.159 "data_offset": 2048, 00:16:39.159 "data_size": 63488 00:16:39.159 }, 00:16:39.159 { 00:16:39.159 "name": "BaseBdev3", 00:16:39.159 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:39.159 "is_configured": true, 00:16:39.159 "data_offset": 2048, 00:16:39.159 "data_size": 63488 00:16:39.159 }, 00:16:39.159 { 00:16:39.159 "name": "BaseBdev4", 00:16:39.159 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:39.159 "is_configured": true, 00:16:39.159 "data_offset": 2048, 00:16:39.159 "data_size": 63488 00:16:39.159 } 00:16:39.159 ] 00:16:39.159 }' 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.159 
10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.159 [2024-11-19 10:27:52.769051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:39.159 [2024-11-19 10:27:52.769204] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.159 [2024-11-19 10:27:52.769228] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:39.159 request: 00:16:39.159 { 00:16:39.159 "base_bdev": "BaseBdev1", 00:16:39.159 "raid_bdev": "raid_bdev1", 00:16:39.159 "method": "bdev_raid_add_base_bdev", 00:16:39.159 "req_id": 1 00:16:39.159 } 00:16:39.159 Got JSON-RPC error response 00:16:39.159 response: 00:16:39.159 { 00:16:39.159 "code": -22, 00:16:39.159 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:39.159 } 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.159 10:27:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:40.098 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:40.098 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.098 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.099 "name": "raid_bdev1", 00:16:40.099 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:40.099 "strip_size_kb": 64, 00:16:40.099 "state": "online", 00:16:40.099 "raid_level": "raid5f", 00:16:40.099 "superblock": true, 00:16:40.099 "num_base_bdevs": 4, 00:16:40.099 "num_base_bdevs_discovered": 3, 00:16:40.099 "num_base_bdevs_operational": 3, 00:16:40.099 "base_bdevs_list": [ 00:16:40.099 { 00:16:40.099 "name": null, 00:16:40.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.099 "is_configured": false, 00:16:40.099 "data_offset": 0, 00:16:40.099 "data_size": 63488 00:16:40.099 }, 00:16:40.099 { 00:16:40.099 "name": "BaseBdev2", 00:16:40.099 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:40.099 "is_configured": true, 00:16:40.099 "data_offset": 2048, 00:16:40.099 "data_size": 63488 00:16:40.099 }, 00:16:40.099 { 00:16:40.099 "name": "BaseBdev3", 00:16:40.099 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:40.099 "is_configured": true, 00:16:40.099 "data_offset": 2048, 00:16:40.099 "data_size": 63488 00:16:40.099 }, 00:16:40.099 { 00:16:40.099 "name": "BaseBdev4", 00:16:40.099 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:40.099 "is_configured": true, 00:16:40.099 "data_offset": 2048, 00:16:40.099 "data_size": 63488 00:16:40.099 } 00:16:40.099 ] 00:16:40.099 }' 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.099 10:27:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.669 "name": "raid_bdev1", 00:16:40.669 "uuid": "d86f20c8-f9df-4bbf-bc54-f76da2569e4c", 00:16:40.669 "strip_size_kb": 64, 00:16:40.669 "state": "online", 00:16:40.669 "raid_level": "raid5f", 00:16:40.669 "superblock": true, 00:16:40.669 "num_base_bdevs": 4, 00:16:40.669 "num_base_bdevs_discovered": 3, 00:16:40.669 "num_base_bdevs_operational": 3, 00:16:40.669 "base_bdevs_list": [ 00:16:40.669 { 00:16:40.669 "name": null, 00:16:40.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.669 "is_configured": false, 00:16:40.669 "data_offset": 0, 00:16:40.669 "data_size": 63488 00:16:40.669 }, 00:16:40.669 { 00:16:40.669 "name": "BaseBdev2", 00:16:40.669 "uuid": "d4714b06-0cdb-504e-b57a-3f0823cc3744", 00:16:40.669 "is_configured": true, 
00:16:40.669 "data_offset": 2048, 00:16:40.669 "data_size": 63488 00:16:40.669 }, 00:16:40.669 { 00:16:40.669 "name": "BaseBdev3", 00:16:40.669 "uuid": "d96f8ef6-e555-560b-9b1a-a7e9b9294f0f", 00:16:40.669 "is_configured": true, 00:16:40.669 "data_offset": 2048, 00:16:40.669 "data_size": 63488 00:16:40.669 }, 00:16:40.669 { 00:16:40.669 "name": "BaseBdev4", 00:16:40.669 "uuid": "a47ab298-7424-5c42-aba3-cfa009801b4d", 00:16:40.669 "is_configured": true, 00:16:40.669 "data_offset": 2048, 00:16:40.669 "data_size": 63488 00:16:40.669 } 00:16:40.669 ] 00:16:40.669 }' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84787 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84787 ']' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84787 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84787 00:16:40.669 killing process with pid 84787 00:16:40.669 Received shutdown signal, test time was about 60.000000 seconds 00:16:40.669 00:16:40.669 Latency(us) 00:16:40.669 [2024-11-19T10:27:54.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:40.669 [2024-11-19T10:27:54.450Z] 
=================================================================================================================== 00:16:40.669 [2024-11-19T10:27:54.450Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84787' 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84787 00:16:40.669 [2024-11-19 10:27:54.421968] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.669 [2024-11-19 10:27:54.422100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.669 10:27:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84787 00:16:40.669 [2024-11-19 10:27:54.422176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.669 [2024-11-19 10:27:54.422188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:41.239 [2024-11-19 10:27:54.877708] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:42.178 10:27:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:42.178 00:16:42.178 real 0m26.624s 00:16:42.178 user 0m33.438s 00:16:42.178 sys 0m2.966s 00:16:42.178 10:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:42.178 10:27:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.178 ************************************ 00:16:42.178 END TEST raid5f_rebuild_test_sb 00:16:42.178 ************************************ 00:16:42.178 10:27:55 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:42.179 10:27:55 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:42.179 10:27:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:42.179 10:27:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:42.179 10:27:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:42.438 ************************************ 00:16:42.438 START TEST raid_state_function_test_sb_4k 00:16:42.438 ************************************ 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:42.438 10:27:55 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85599 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85599' 00:16:42.438 Process raid pid: 85599 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85599 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85599 ']' 00:16:42.438 10:27:55 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:42.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:42.438 10:27:55 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.438 [2024-11-19 10:27:56.073092] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:16:42.438 [2024-11-19 10:27:56.073213] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.698 [2024-11-19 10:27:56.251853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.698 [2024-11-19 10:27:56.357041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:42.957 [2024-11-19 10:27:56.549647] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:42.957 [2024-11-19 10:27:56.549683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 [2024-11-19 10:27:56.855129] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.217 [2024-11-19 10:27:56.855179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.217 [2024-11-19 10:27:56.855189] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.217 [2024-11-19 10:27:56.855199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.217 
10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.217 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.218 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.218 "name": "Existed_Raid", 00:16:43.218 "uuid": "88269035-2bdd-400a-a4b2-c4e3f7debec4", 00:16:43.218 "strip_size_kb": 0, 00:16:43.218 "state": "configuring", 00:16:43.218 "raid_level": "raid1", 00:16:43.218 "superblock": true, 00:16:43.218 "num_base_bdevs": 2, 00:16:43.218 "num_base_bdevs_discovered": 0, 00:16:43.218 "num_base_bdevs_operational": 2, 00:16:43.218 "base_bdevs_list": [ 00:16:43.218 { 00:16:43.218 "name": "BaseBdev1", 00:16:43.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.218 "is_configured": false, 00:16:43.218 "data_offset": 0, 00:16:43.218 "data_size": 0 00:16:43.218 }, 00:16:43.218 { 00:16:43.218 "name": "BaseBdev2", 00:16:43.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.218 "is_configured": false, 00:16:43.218 "data_offset": 0, 00:16:43.218 "data_size": 0 00:16:43.218 } 00:16:43.218 ] 00:16:43.218 }' 00:16:43.218 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.218 10:27:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 [2024-11-19 10:27:57.290291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:43.788 [2024-11-19 10:27:57.290326] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 [2024-11-19 10:27:57.302276] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:43.788 [2024-11-19 10:27:57.302317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:43.788 [2024-11-19 10:27:57.302325] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:43.788 [2024-11-19 10:27:57.302352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.788 10:27:57 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 [2024-11-19 10:27:57.347986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:43.788 BaseBdev1 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 [ 00:16:43.788 { 00:16:43.788 "name": "BaseBdev1", 00:16:43.788 "aliases": [ 00:16:43.788 
"1df79523-0832-4ec2-b93c-d097879b6952" 00:16:43.788 ], 00:16:43.788 "product_name": "Malloc disk", 00:16:43.788 "block_size": 4096, 00:16:43.788 "num_blocks": 8192, 00:16:43.788 "uuid": "1df79523-0832-4ec2-b93c-d097879b6952", 00:16:43.788 "assigned_rate_limits": { 00:16:43.788 "rw_ios_per_sec": 0, 00:16:43.788 "rw_mbytes_per_sec": 0, 00:16:43.788 "r_mbytes_per_sec": 0, 00:16:43.788 "w_mbytes_per_sec": 0 00:16:43.788 }, 00:16:43.788 "claimed": true, 00:16:43.788 "claim_type": "exclusive_write", 00:16:43.788 "zoned": false, 00:16:43.788 "supported_io_types": { 00:16:43.788 "read": true, 00:16:43.788 "write": true, 00:16:43.788 "unmap": true, 00:16:43.788 "flush": true, 00:16:43.788 "reset": true, 00:16:43.788 "nvme_admin": false, 00:16:43.788 "nvme_io": false, 00:16:43.788 "nvme_io_md": false, 00:16:43.788 "write_zeroes": true, 00:16:43.788 "zcopy": true, 00:16:43.788 "get_zone_info": false, 00:16:43.788 "zone_management": false, 00:16:43.788 "zone_append": false, 00:16:43.788 "compare": false, 00:16:43.788 "compare_and_write": false, 00:16:43.788 "abort": true, 00:16:43.788 "seek_hole": false, 00:16:43.788 "seek_data": false, 00:16:43.788 "copy": true, 00:16:43.788 "nvme_iov_md": false 00:16:43.788 }, 00:16:43.788 "memory_domains": [ 00:16:43.788 { 00:16:43.788 "dma_device_id": "system", 00:16:43.788 "dma_device_type": 1 00:16:43.788 }, 00:16:43.788 { 00:16:43.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:43.788 "dma_device_type": 2 00:16:43.788 } 00:16:43.788 ], 00:16:43.788 "driver_specific": {} 00:16:43.788 } 00:16:43.788 ] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:43.788 "name": "Existed_Raid", 00:16:43.788 "uuid": "6980955c-a296-4a3a-89f0-353b4247f18d", 00:16:43.788 "strip_size_kb": 0, 00:16:43.788 "state": "configuring", 00:16:43.788 "raid_level": "raid1", 00:16:43.788 "superblock": true, 00:16:43.788 "num_base_bdevs": 2, 00:16:43.788 
"num_base_bdevs_discovered": 1, 00:16:43.788 "num_base_bdevs_operational": 2, 00:16:43.788 "base_bdevs_list": [ 00:16:43.788 { 00:16:43.788 "name": "BaseBdev1", 00:16:43.788 "uuid": "1df79523-0832-4ec2-b93c-d097879b6952", 00:16:43.788 "is_configured": true, 00:16:43.788 "data_offset": 256, 00:16:43.788 "data_size": 7936 00:16:43.788 }, 00:16:43.788 { 00:16:43.788 "name": "BaseBdev2", 00:16:43.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:43.788 "is_configured": false, 00:16:43.788 "data_offset": 0, 00:16:43.788 "data_size": 0 00:16:43.788 } 00:16:43.788 ] 00:16:43.788 }' 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:43.788 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.048 [2024-11-19 10:27:57.815238] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:44.048 [2024-11-19 10:27:57.815283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.048 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.048 [2024-11-19 10:27:57.827280] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.308 [2024-11-19 10:27:57.829020] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:44.308 [2024-11-19 10:27:57.829062] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.308 "name": "Existed_Raid", 00:16:44.308 "uuid": "55beb195-a5da-464a-b3ef-5559b192f2ee", 00:16:44.308 "strip_size_kb": 0, 00:16:44.308 "state": "configuring", 00:16:44.308 "raid_level": "raid1", 00:16:44.308 "superblock": true, 00:16:44.308 "num_base_bdevs": 2, 00:16:44.308 "num_base_bdevs_discovered": 1, 00:16:44.308 "num_base_bdevs_operational": 2, 00:16:44.308 "base_bdevs_list": [ 00:16:44.308 { 00:16:44.308 "name": "BaseBdev1", 00:16:44.308 "uuid": "1df79523-0832-4ec2-b93c-d097879b6952", 00:16:44.308 "is_configured": true, 00:16:44.308 "data_offset": 256, 00:16:44.308 "data_size": 7936 00:16:44.308 }, 00:16:44.308 { 00:16:44.308 "name": "BaseBdev2", 00:16:44.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.308 "is_configured": false, 00:16:44.308 "data_offset": 0, 00:16:44.308 "data_size": 0 00:16:44.308 } 00:16:44.308 ] 00:16:44.308 }' 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.308 10:27:57 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.568 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:44.568 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.568 10:27:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.568 [2024-11-19 10:27:58.270621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:44.568 [2024-11-19 10:27:58.270870] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:44.568 [2024-11-19 10:27:58.270890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:44.568 [2024-11-19 10:27:58.271169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:44.568 [2024-11-19 10:27:58.271344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:44.568 [2024-11-19 10:27:58.271363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:44.568 BaseBdev2 00:16:44.568 [2024-11-19 10:27:58.271518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:44.569 10:27:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 [ 00:16:44.569 { 00:16:44.569 "name": "BaseBdev2", 00:16:44.569 "aliases": [ 00:16:44.569 "6bab817e-83d8-4960-8ca0-467f6c6179e7" 00:16:44.569 ], 00:16:44.569 "product_name": "Malloc disk", 00:16:44.569 "block_size": 4096, 00:16:44.569 "num_blocks": 8192, 00:16:44.569 "uuid": "6bab817e-83d8-4960-8ca0-467f6c6179e7", 00:16:44.569 "assigned_rate_limits": { 00:16:44.569 "rw_ios_per_sec": 0, 00:16:44.569 "rw_mbytes_per_sec": 0, 00:16:44.569 "r_mbytes_per_sec": 0, 00:16:44.569 "w_mbytes_per_sec": 0 00:16:44.569 }, 00:16:44.569 "claimed": true, 00:16:44.569 "claim_type": "exclusive_write", 00:16:44.569 "zoned": false, 00:16:44.569 "supported_io_types": { 00:16:44.569 "read": true, 00:16:44.569 "write": true, 00:16:44.569 "unmap": true, 00:16:44.569 "flush": true, 00:16:44.569 "reset": true, 00:16:44.569 "nvme_admin": false, 00:16:44.569 "nvme_io": false, 00:16:44.569 "nvme_io_md": false, 00:16:44.569 "write_zeroes": true, 00:16:44.569 "zcopy": true, 00:16:44.569 "get_zone_info": false, 00:16:44.569 "zone_management": false, 00:16:44.569 "zone_append": false, 00:16:44.569 "compare": false, 00:16:44.569 "compare_and_write": false, 00:16:44.569 "abort": true, 00:16:44.569 "seek_hole": false, 00:16:44.569 "seek_data": false, 00:16:44.569 "copy": true, 00:16:44.569 "nvme_iov_md": false 
00:16:44.569 }, 00:16:44.569 "memory_domains": [ 00:16:44.569 { 00:16:44.569 "dma_device_id": "system", 00:16:44.569 "dma_device_type": 1 00:16:44.569 }, 00:16:44.569 { 00:16:44.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:44.569 "dma_device_type": 2 00:16:44.569 } 00:16:44.569 ], 00:16:44.569 "driver_specific": {} 00:16:44.569 } 00:16:44.569 ] 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.569 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.829 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.829 "name": "Existed_Raid", 00:16:44.829 "uuid": "55beb195-a5da-464a-b3ef-5559b192f2ee", 00:16:44.829 "strip_size_kb": 0, 00:16:44.829 "state": "online", 00:16:44.829 "raid_level": "raid1", 00:16:44.829 "superblock": true, 00:16:44.829 "num_base_bdevs": 2, 00:16:44.829 "num_base_bdevs_discovered": 2, 00:16:44.829 "num_base_bdevs_operational": 2, 00:16:44.829 "base_bdevs_list": [ 00:16:44.829 { 00:16:44.829 "name": "BaseBdev1", 00:16:44.829 "uuid": "1df79523-0832-4ec2-b93c-d097879b6952", 00:16:44.829 "is_configured": true, 00:16:44.829 "data_offset": 256, 00:16:44.829 "data_size": 7936 00:16:44.829 }, 00:16:44.829 { 00:16:44.829 "name": "BaseBdev2", 00:16:44.829 "uuid": "6bab817e-83d8-4960-8ca0-467f6c6179e7", 00:16:44.829 "is_configured": true, 00:16:44.829 "data_offset": 256, 00:16:44.829 "data_size": 7936 00:16:44.829 } 00:16:44.829 ] 00:16:44.829 }' 00:16:44.829 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.829 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:45.088 10:27:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.088 [2024-11-19 10:27:58.774021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.088 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:45.088 "name": "Existed_Raid", 00:16:45.088 "aliases": [ 00:16:45.088 "55beb195-a5da-464a-b3ef-5559b192f2ee" 00:16:45.088 ], 00:16:45.088 "product_name": "Raid Volume", 00:16:45.088 "block_size": 4096, 00:16:45.088 "num_blocks": 7936, 00:16:45.088 "uuid": "55beb195-a5da-464a-b3ef-5559b192f2ee", 00:16:45.088 "assigned_rate_limits": { 00:16:45.088 "rw_ios_per_sec": 0, 00:16:45.088 "rw_mbytes_per_sec": 0, 00:16:45.088 "r_mbytes_per_sec": 0, 00:16:45.088 "w_mbytes_per_sec": 0 00:16:45.088 }, 00:16:45.088 "claimed": false, 00:16:45.088 "zoned": false, 00:16:45.088 "supported_io_types": { 00:16:45.088 "read": true, 
00:16:45.088 "write": true, 00:16:45.088 "unmap": false, 00:16:45.088 "flush": false, 00:16:45.088 "reset": true, 00:16:45.088 "nvme_admin": false, 00:16:45.088 "nvme_io": false, 00:16:45.088 "nvme_io_md": false, 00:16:45.088 "write_zeroes": true, 00:16:45.088 "zcopy": false, 00:16:45.088 "get_zone_info": false, 00:16:45.088 "zone_management": false, 00:16:45.088 "zone_append": false, 00:16:45.088 "compare": false, 00:16:45.088 "compare_and_write": false, 00:16:45.088 "abort": false, 00:16:45.088 "seek_hole": false, 00:16:45.088 "seek_data": false, 00:16:45.088 "copy": false, 00:16:45.088 "nvme_iov_md": false 00:16:45.088 }, 00:16:45.088 "memory_domains": [ 00:16:45.088 { 00:16:45.088 "dma_device_id": "system", 00:16:45.088 "dma_device_type": 1 00:16:45.088 }, 00:16:45.088 { 00:16:45.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.088 "dma_device_type": 2 00:16:45.088 }, 00:16:45.088 { 00:16:45.088 "dma_device_id": "system", 00:16:45.088 "dma_device_type": 1 00:16:45.088 }, 00:16:45.088 { 00:16:45.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:45.088 "dma_device_type": 2 00:16:45.088 } 00:16:45.088 ], 00:16:45.088 "driver_specific": { 00:16:45.088 "raid": { 00:16:45.088 "uuid": "55beb195-a5da-464a-b3ef-5559b192f2ee", 00:16:45.089 "strip_size_kb": 0, 00:16:45.089 "state": "online", 00:16:45.089 "raid_level": "raid1", 00:16:45.089 "superblock": true, 00:16:45.089 "num_base_bdevs": 2, 00:16:45.089 "num_base_bdevs_discovered": 2, 00:16:45.089 "num_base_bdevs_operational": 2, 00:16:45.089 "base_bdevs_list": [ 00:16:45.089 { 00:16:45.089 "name": "BaseBdev1", 00:16:45.089 "uuid": "1df79523-0832-4ec2-b93c-d097879b6952", 00:16:45.089 "is_configured": true, 00:16:45.089 "data_offset": 256, 00:16:45.089 "data_size": 7936 00:16:45.089 }, 00:16:45.089 { 00:16:45.089 "name": "BaseBdev2", 00:16:45.089 "uuid": "6bab817e-83d8-4960-8ca0-467f6c6179e7", 00:16:45.089 "is_configured": true, 00:16:45.089 "data_offset": 256, 00:16:45.089 "data_size": 7936 00:16:45.089 } 
00:16:45.089 ] 00:16:45.089 } 00:16:45.089 } 00:16:45.089 }' 00:16:45.089 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:45.089 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:45.089 BaseBdev2' 00:16:45.089 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:45.348 10:27:58 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.348 10:27:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.348 [2024-11-19 10:27:58.969481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:45.348 10:27:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.348 "name": "Existed_Raid", 00:16:45.348 "uuid": "55beb195-a5da-464a-b3ef-5559b192f2ee", 00:16:45.348 "strip_size_kb": 0, 00:16:45.348 "state": "online", 00:16:45.348 "raid_level": "raid1", 00:16:45.348 "superblock": true, 00:16:45.348 
"num_base_bdevs": 2, 00:16:45.348 "num_base_bdevs_discovered": 1, 00:16:45.348 "num_base_bdevs_operational": 1, 00:16:45.348 "base_bdevs_list": [ 00:16:45.348 { 00:16:45.348 "name": null, 00:16:45.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.348 "is_configured": false, 00:16:45.348 "data_offset": 0, 00:16:45.348 "data_size": 7936 00:16:45.348 }, 00:16:45.348 { 00:16:45.348 "name": "BaseBdev2", 00:16:45.348 "uuid": "6bab817e-83d8-4960-8ca0-467f6c6179e7", 00:16:45.348 "is_configured": true, 00:16:45.348 "data_offset": 256, 00:16:45.348 "data_size": 7936 00:16:45.348 } 00:16:45.348 ] 00:16:45.348 }' 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.348 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.916 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.917 [2024-11-19 10:27:59.547160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:45.917 [2024-11-19 10:27:59.547259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:45.917 [2024-11-19 10:27:59.634343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:45.917 [2024-11-19 10:27:59.634413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:45.917 [2024-11-19 10:27:59.634424] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:45.917 10:27:59 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85599 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85599 ']' 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85599 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:45.917 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85599 00:16:46.176 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:46.176 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:46.176 killing process with pid 85599 00:16:46.176 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85599' 00:16:46.176 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85599 00:16:46.176 [2024-11-19 10:27:59.725957] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.176 10:27:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85599 00:16:46.176 [2024-11-19 10:27:59.741468] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:47.124 10:28:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:47.124 00:16:47.124 real 0m4.797s 00:16:47.124 user 0m6.922s 00:16:47.124 sys 0m0.852s 00:16:47.124 
************************************ 00:16:47.124 END TEST raid_state_function_test_sb_4k 00:16:47.124 ************************************ 00:16:47.124 10:28:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.124 10:28:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.124 10:28:00 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:47.124 10:28:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:47.124 10:28:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.124 10:28:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:47.124 ************************************ 00:16:47.124 START TEST raid_superblock_test_4k 00:16:47.124 ************************************ 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85846 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85846 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85846 ']' 00:16:47.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:47.124 10:28:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.388 [2024-11-19 10:28:00.943182] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:16:47.388 [2024-11-19 10:28:00.943376] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85846 ] 00:16:47.388 [2024-11-19 10:28:01.117364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.649 [2024-11-19 10:28:01.229252] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.649 [2024-11-19 10:28:01.414682] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.649 [2024-11-19 10:28:01.414794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.218 malloc1 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.218 [2024-11-19 10:28:01.790472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:48.218 [2024-11-19 10:28:01.790551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.218 [2024-11-19 10:28:01.790575] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:48.218 [2024-11-19 10:28:01.790584] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.218 [2024-11-19 10:28:01.792602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.218 [2024-11-19 10:28:01.792698] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:48.218 pt1 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:48.218 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.219 malloc2 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.219 [2024-11-19 10:28:01.842440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:48.219 [2024-11-19 10:28:01.842545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.219 [2024-11-19 10:28:01.842582] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:48.219 [2024-11-19 10:28:01.842609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.219 [2024-11-19 10:28:01.844630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.219 [2024-11-19 
10:28:01.844700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:48.219 pt2 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.219 [2024-11-19 10:28:01.854484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:48.219 [2024-11-19 10:28:01.856273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:48.219 [2024-11-19 10:28:01.856494] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:48.219 [2024-11-19 10:28:01.856543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:48.219 [2024-11-19 10:28:01.856778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:48.219 [2024-11-19 10:28:01.856979] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:48.219 [2024-11-19 10:28:01.857042] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:48.219 [2024-11-19 10:28:01.857234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.219 "name": "raid_bdev1", 00:16:48.219 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:48.219 "strip_size_kb": 0, 00:16:48.219 "state": "online", 00:16:48.219 "raid_level": "raid1", 00:16:48.219 "superblock": true, 00:16:48.219 "num_base_bdevs": 2, 00:16:48.219 
"num_base_bdevs_discovered": 2, 00:16:48.219 "num_base_bdevs_operational": 2, 00:16:48.219 "base_bdevs_list": [ 00:16:48.219 { 00:16:48.219 "name": "pt1", 00:16:48.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.219 "is_configured": true, 00:16:48.219 "data_offset": 256, 00:16:48.219 "data_size": 7936 00:16:48.219 }, 00:16:48.219 { 00:16:48.219 "name": "pt2", 00:16:48.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.219 "is_configured": true, 00:16:48.219 "data_offset": 256, 00:16:48.219 "data_size": 7936 00:16:48.219 } 00:16:48.219 ] 00:16:48.219 }' 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.219 10:28:01 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.790 [2024-11-19 10:28:02.313924] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.790 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:48.790 "name": "raid_bdev1", 00:16:48.790 "aliases": [ 00:16:48.790 "49252179-0d55-48e1-92bb-5d6509bc4057" 00:16:48.790 ], 00:16:48.790 "product_name": "Raid Volume", 00:16:48.790 "block_size": 4096, 00:16:48.790 "num_blocks": 7936, 00:16:48.790 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:48.790 "assigned_rate_limits": { 00:16:48.790 "rw_ios_per_sec": 0, 00:16:48.790 "rw_mbytes_per_sec": 0, 00:16:48.790 "r_mbytes_per_sec": 0, 00:16:48.790 "w_mbytes_per_sec": 0 00:16:48.790 }, 00:16:48.790 "claimed": false, 00:16:48.790 "zoned": false, 00:16:48.790 "supported_io_types": { 00:16:48.790 "read": true, 00:16:48.790 "write": true, 00:16:48.790 "unmap": false, 00:16:48.790 "flush": false, 00:16:48.790 "reset": true, 00:16:48.790 "nvme_admin": false, 00:16:48.790 "nvme_io": false, 00:16:48.790 "nvme_io_md": false, 00:16:48.790 "write_zeroes": true, 00:16:48.790 "zcopy": false, 00:16:48.790 "get_zone_info": false, 00:16:48.790 "zone_management": false, 00:16:48.790 "zone_append": false, 00:16:48.790 "compare": false, 00:16:48.790 "compare_and_write": false, 00:16:48.790 "abort": false, 00:16:48.790 "seek_hole": false, 00:16:48.790 "seek_data": false, 00:16:48.790 "copy": false, 00:16:48.790 "nvme_iov_md": false 00:16:48.790 }, 00:16:48.790 "memory_domains": [ 00:16:48.790 { 00:16:48.790 "dma_device_id": "system", 00:16:48.790 "dma_device_type": 1 00:16:48.790 }, 00:16:48.790 { 00:16:48.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.790 "dma_device_type": 2 00:16:48.790 }, 00:16:48.790 { 00:16:48.790 "dma_device_id": "system", 00:16:48.790 "dma_device_type": 1 00:16:48.790 }, 00:16:48.790 { 00:16:48.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:48.790 "dma_device_type": 2 00:16:48.790 } 00:16:48.790 ], 
00:16:48.790 "driver_specific": { 00:16:48.790 "raid": { 00:16:48.790 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:48.790 "strip_size_kb": 0, 00:16:48.790 "state": "online", 00:16:48.790 "raid_level": "raid1", 00:16:48.790 "superblock": true, 00:16:48.790 "num_base_bdevs": 2, 00:16:48.790 "num_base_bdevs_discovered": 2, 00:16:48.790 "num_base_bdevs_operational": 2, 00:16:48.790 "base_bdevs_list": [ 00:16:48.790 { 00:16:48.790 "name": "pt1", 00:16:48.790 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:48.790 "is_configured": true, 00:16:48.790 "data_offset": 256, 00:16:48.790 "data_size": 7936 00:16:48.790 }, 00:16:48.790 { 00:16:48.790 "name": "pt2", 00:16:48.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:48.790 "is_configured": true, 00:16:48.790 "data_offset": 256, 00:16:48.790 "data_size": 7936 00:16:48.790 } 00:16:48.790 ] 00:16:48.790 } 00:16:48.790 } 00:16:48.790 }' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:48.791 pt2' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.791 10:28:02 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.791 [2024-11-19 10:28:02.501547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=49252179-0d55-48e1-92bb-5d6509bc4057 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 49252179-0d55-48e1-92bb-5d6509bc4057 ']' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.791 [2024-11-19 10:28:02.545231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.791 [2024-11-19 10:28:02.545253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.791 [2024-11-19 10:28:02.545319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.791 [2024-11-19 10:28:02.545371] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.791 [2024-11-19 10:28:02.545385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.791 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.052 [2024-11-19 10:28:02.685038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:49.052 [2024-11-19 10:28:02.686754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:49.052 [2024-11-19 10:28:02.686891] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:49.052 [2024-11-19 10:28:02.686943] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:49.052 [2024-11-19 10:28:02.686957] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.052 [2024-11-19 10:28:02.686967] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:49.052 request: 00:16:49.052 { 00:16:49.052 "name": "raid_bdev1", 00:16:49.052 "raid_level": "raid1", 00:16:49.052 "base_bdevs": [ 00:16:49.052 "malloc1", 00:16:49.052 "malloc2" 00:16:49.052 ], 00:16:49.052 "superblock": false, 00:16:49.052 "method": "bdev_raid_create", 00:16:49.052 "req_id": 1 00:16:49.052 } 00:16:49.052 Got JSON-RPC error response 00:16:49.052 response: 00:16:49.052 { 00:16:49.052 "code": -17, 00:16:49.052 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:49.052 } 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:49.052 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.053 [2024-11-19 10:28:02.752877] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:49.053 [2024-11-19 10:28:02.752964] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.053 [2024-11-19 10:28:02.753001] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:49.053 [2024-11-19 10:28:02.753055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.053 [2024-11-19 10:28:02.755069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.053 [2024-11-19 10:28:02.755136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:49.053 [2024-11-19 10:28:02.755226] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:49.053 [2024-11-19 10:28:02.755312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:49.053 pt1 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.053 "name": "raid_bdev1", 00:16:49.053 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:49.053 "strip_size_kb": 0, 00:16:49.053 "state": "configuring", 00:16:49.053 "raid_level": "raid1", 00:16:49.053 "superblock": true, 00:16:49.053 "num_base_bdevs": 2, 00:16:49.053 "num_base_bdevs_discovered": 1, 00:16:49.053 "num_base_bdevs_operational": 2, 00:16:49.053 "base_bdevs_list": [ 00:16:49.053 { 00:16:49.053 "name": "pt1", 00:16:49.053 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.053 "is_configured": true, 00:16:49.053 "data_offset": 256, 00:16:49.053 "data_size": 7936 00:16:49.053 }, 00:16:49.053 { 00:16:49.053 "name": null, 00:16:49.053 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.053 "is_configured": false, 00:16:49.053 "data_offset": 256, 00:16:49.053 "data_size": 7936 00:16:49.053 } 
00:16:49.053 ] 00:16:49.053 }' 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.053 10:28:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:49.623 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:49.623 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.623 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:49.623 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.623 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.623 [2024-11-19 10:28:03.176150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:49.623 [2024-11-19 10:28:03.176203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.623 [2024-11-19 10:28:03.176220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:49.623 [2024-11-19 10:28:03.176229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.623 [2024-11-19 10:28:03.176591] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.623 [2024-11-19 10:28:03.176609] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:49.623 [2024-11-19 10:28:03.176670] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:49.623 [2024-11-19 10:28:03.176689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:49.623 [2024-11-19 10:28:03.176792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:49.623 [2024-11-19 10:28:03.176803] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:49.623 [2024-11-19 10:28:03.177029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:49.624 [2024-11-19 10:28:03.177180] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:49.624 [2024-11-19 10:28:03.177189] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:49.624 [2024-11-19 10:28:03.177308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:49.624 pt2 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.624 "name": "raid_bdev1", 00:16:49.624 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:49.624 "strip_size_kb": 0, 00:16:49.624 "state": "online", 00:16:49.624 "raid_level": "raid1", 00:16:49.624 "superblock": true, 00:16:49.624 "num_base_bdevs": 2, 00:16:49.624 "num_base_bdevs_discovered": 2, 00:16:49.624 "num_base_bdevs_operational": 2, 00:16:49.624 "base_bdevs_list": [ 00:16:49.624 { 00:16:49.624 "name": "pt1", 00:16:49.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.624 "is_configured": true, 00:16:49.624 "data_offset": 256, 00:16:49.624 "data_size": 7936 00:16:49.624 }, 00:16:49.624 { 00:16:49.624 "name": "pt2", 00:16:49.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.624 "is_configured": true, 00:16:49.624 "data_offset": 256, 00:16:49.624 "data_size": 7936 00:16:49.624 } 00:16:49.624 ] 00:16:49.624 }' 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.624 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.884 [2024-11-19 10:28:03.615681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:49.884 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.885 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:49.885 "name": "raid_bdev1", 00:16:49.885 "aliases": [ 00:16:49.885 "49252179-0d55-48e1-92bb-5d6509bc4057" 00:16:49.885 ], 00:16:49.885 "product_name": "Raid Volume", 00:16:49.885 "block_size": 4096, 00:16:49.885 "num_blocks": 7936, 00:16:49.885 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:49.885 "assigned_rate_limits": { 00:16:49.885 "rw_ios_per_sec": 0, 00:16:49.885 "rw_mbytes_per_sec": 0, 00:16:49.885 "r_mbytes_per_sec": 0, 00:16:49.885 "w_mbytes_per_sec": 0 00:16:49.885 }, 00:16:49.885 "claimed": false, 00:16:49.885 "zoned": false, 00:16:49.885 "supported_io_types": { 00:16:49.885 "read": true, 00:16:49.885 "write": true, 00:16:49.885 "unmap": false, 
00:16:49.885 "flush": false, 00:16:49.885 "reset": true, 00:16:49.885 "nvme_admin": false, 00:16:49.885 "nvme_io": false, 00:16:49.885 "nvme_io_md": false, 00:16:49.885 "write_zeroes": true, 00:16:49.885 "zcopy": false, 00:16:49.885 "get_zone_info": false, 00:16:49.885 "zone_management": false, 00:16:49.885 "zone_append": false, 00:16:49.885 "compare": false, 00:16:49.885 "compare_and_write": false, 00:16:49.885 "abort": false, 00:16:49.885 "seek_hole": false, 00:16:49.885 "seek_data": false, 00:16:49.885 "copy": false, 00:16:49.885 "nvme_iov_md": false 00:16:49.885 }, 00:16:49.885 "memory_domains": [ 00:16:49.885 { 00:16:49.885 "dma_device_id": "system", 00:16:49.885 "dma_device_type": 1 00:16:49.885 }, 00:16:49.885 { 00:16:49.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.885 "dma_device_type": 2 00:16:49.885 }, 00:16:49.885 { 00:16:49.885 "dma_device_id": "system", 00:16:49.885 "dma_device_type": 1 00:16:49.885 }, 00:16:49.885 { 00:16:49.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.885 "dma_device_type": 2 00:16:49.885 } 00:16:49.885 ], 00:16:49.885 "driver_specific": { 00:16:49.885 "raid": { 00:16:49.885 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:49.885 "strip_size_kb": 0, 00:16:49.885 "state": "online", 00:16:49.885 "raid_level": "raid1", 00:16:49.885 "superblock": true, 00:16:49.885 "num_base_bdevs": 2, 00:16:49.885 "num_base_bdevs_discovered": 2, 00:16:49.885 "num_base_bdevs_operational": 2, 00:16:49.885 "base_bdevs_list": [ 00:16:49.885 { 00:16:49.885 "name": "pt1", 00:16:49.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:49.885 "is_configured": true, 00:16:49.885 "data_offset": 256, 00:16:49.885 "data_size": 7936 00:16:49.885 }, 00:16:49.885 { 00:16:49.885 "name": "pt2", 00:16:49.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:49.885 "is_configured": true, 00:16:49.885 "data_offset": 256, 00:16:49.885 "data_size": 7936 00:16:49.885 } 00:16:49.885 ] 00:16:49.885 } 00:16:49.885 } 00:16:49.885 }' 00:16:49.885 
10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:50.145 pt2' 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.145 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:50.146 
10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.146 [2024-11-19 10:28:03.827299] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 49252179-0d55-48e1-92bb-5d6509bc4057 '!=' 49252179-0d55-48e1-92bb-5d6509bc4057 ']' 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.146 [2024-11-19 10:28:03.871085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:50.146 
10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.146 "name": "raid_bdev1", 00:16:50.146 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 
00:16:50.146 "strip_size_kb": 0, 00:16:50.146 "state": "online", 00:16:50.146 "raid_level": "raid1", 00:16:50.146 "superblock": true, 00:16:50.146 "num_base_bdevs": 2, 00:16:50.146 "num_base_bdevs_discovered": 1, 00:16:50.146 "num_base_bdevs_operational": 1, 00:16:50.146 "base_bdevs_list": [ 00:16:50.146 { 00:16:50.146 "name": null, 00:16:50.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.146 "is_configured": false, 00:16:50.146 "data_offset": 0, 00:16:50.146 "data_size": 7936 00:16:50.146 }, 00:16:50.146 { 00:16:50.146 "name": "pt2", 00:16:50.146 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.146 "is_configured": true, 00:16:50.146 "data_offset": 256, 00:16:50.146 "data_size": 7936 00:16:50.146 } 00:16:50.146 ] 00:16:50.146 }' 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.146 10:28:03 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.717 [2024-11-19 10:28:04.306301] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:50.717 [2024-11-19 10:28:04.306322] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:50.717 [2024-11-19 10:28:04.306376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:50.717 [2024-11-19 10:28:04.306413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:50.717 [2024-11-19 10:28:04.306423] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:50.717 10:28:04 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:16:50.717 10:28:04 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.717 [2024-11-19 10:28:04.378175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:50.717 [2024-11-19 10:28:04.378272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.717 [2024-11-19 10:28:04.378307] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:50.717 [2024-11-19 10:28:04.378318] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.717 [2024-11-19 10:28:04.380394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.717 [2024-11-19 10:28:04.380433] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:50.717 [2024-11-19 10:28:04.380502] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:50.717 [2024-11-19 10:28:04.380549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:50.717 [2024-11-19 10:28:04.380665] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:50.717 [2024-11-19 10:28:04.380677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:50.717 [2024-11-19 10:28:04.380901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:50.717 [2024-11-19 10:28:04.381083] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:50.717 [2024-11-19 10:28:04.381093] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:16:50.717 [2024-11-19 10:28:04.381237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.717 pt2 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.717 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.718 "name": "raid_bdev1", 00:16:50.718 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:50.718 "strip_size_kb": 0, 00:16:50.718 "state": "online", 00:16:50.718 "raid_level": "raid1", 00:16:50.718 "superblock": true, 00:16:50.718 "num_base_bdevs": 2, 00:16:50.718 "num_base_bdevs_discovered": 1, 00:16:50.718 "num_base_bdevs_operational": 1, 00:16:50.718 "base_bdevs_list": [ 00:16:50.718 { 00:16:50.718 "name": null, 00:16:50.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.718 "is_configured": false, 00:16:50.718 "data_offset": 256, 00:16:50.718 "data_size": 7936 00:16:50.718 }, 00:16:50.718 { 00:16:50.718 "name": "pt2", 00:16:50.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:50.718 "is_configured": true, 00:16:50.718 "data_offset": 256, 00:16:50.718 "data_size": 7936 00:16:50.718 } 00:16:50.718 ] 00:16:50.718 }' 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.718 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.288 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:51.288 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.288 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.288 [2024-11-19 10:28:04.801432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.288 [2024-11-19 10:28:04.801502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:51.288 [2024-11-19 10:28:04.801579] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.288 [2024-11-19 10:28:04.801664] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.288 [2024-11-19 10:28:04.801707] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:51.288 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.288 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.289 [2024-11-19 10:28:04.865334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:51.289 [2024-11-19 10:28:04.865422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.289 [2024-11-19 10:28:04.865454] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:51.289 [2024-11-19 10:28:04.865500] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.289 [2024-11-19 10:28:04.867540] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.289 [2024-11-19 10:28:04.867610] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:51.289 [2024-11-19 10:28:04.867721] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:51.289 [2024-11-19 10:28:04.867790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:51.289 [2024-11-19 10:28:04.867953] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:51.289 [2024-11-19 10:28:04.868016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:51.289 [2024-11-19 10:28:04.868088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:51.289 [2024-11-19 10:28:04.868187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:51.289 [2024-11-19 10:28:04.868291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:51.289 [2024-11-19 10:28:04.868328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:51.289 [2024-11-19 10:28:04.868570] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:51.289 [2024-11-19 10:28:04.868738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:51.289 [2024-11-19 10:28:04.868754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:51.289 [2024-11-19 10:28:04.868894] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.289 pt1 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.289 "name": "raid_bdev1", 00:16:51.289 "uuid": "49252179-0d55-48e1-92bb-5d6509bc4057", 00:16:51.289 "strip_size_kb": 0, 00:16:51.289 "state": "online", 00:16:51.289 "raid_level": "raid1", 
00:16:51.289 "superblock": true, 00:16:51.289 "num_base_bdevs": 2, 00:16:51.289 "num_base_bdevs_discovered": 1, 00:16:51.289 "num_base_bdevs_operational": 1, 00:16:51.289 "base_bdevs_list": [ 00:16:51.289 { 00:16:51.289 "name": null, 00:16:51.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.289 "is_configured": false, 00:16:51.289 "data_offset": 256, 00:16:51.289 "data_size": 7936 00:16:51.289 }, 00:16:51.289 { 00:16:51.289 "name": "pt2", 00:16:51.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:51.289 "is_configured": true, 00:16:51.289 "data_offset": 256, 00:16:51.289 "data_size": 7936 00:16:51.289 } 00:16:51.289 ] 00:16:51.289 }' 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.289 10:28:04 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.549 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:51.549 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.549 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.549 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:51.549 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.810 
[2024-11-19 10:28:05.348687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 49252179-0d55-48e1-92bb-5d6509bc4057 '!=' 49252179-0d55-48e1-92bb-5d6509bc4057 ']' 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85846 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85846 ']' 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85846 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85846 00:16:51.810 killing process with pid 85846 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85846' 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85846 00:16:51.810 [2024-11-19 10:28:05.430166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:51.810 [2024-11-19 10:28:05.430229] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:51.810 [2024-11-19 10:28:05.430264] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:51.810 [2024-11-19 10:28:05.430276] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:51.810 10:28:05 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85846 00:16:52.070 [2024-11-19 10:28:05.620970] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:53.024 10:28:06 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:53.024 00:16:53.024 real 0m5.813s 00:16:53.024 user 0m8.768s 00:16:53.024 sys 0m1.125s 00:16:53.024 ************************************ 00:16:53.024 END TEST raid_superblock_test_4k 00:16:53.024 ************************************ 00:16:53.024 10:28:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.024 10:28:06 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.024 10:28:06 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:53.024 10:28:06 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:53.024 10:28:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:53.024 10:28:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.024 10:28:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:53.024 ************************************ 00:16:53.024 START TEST raid_rebuild_test_sb_4k 00:16:53.024 ************************************ 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:53.024 10:28:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86173 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86173 00:16:53.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86173 ']' 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:53.024 10:28:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.283 [2024-11-19 10:28:06.851826] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:16:53.283 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:53.283 Zero copy mechanism will not be used. 
00:16:53.283 [2024-11-19 10:28:06.852422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86173 ] 00:16:53.283 [2024-11-19 10:28:07.029504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.543 [2024-11-19 10:28:07.129733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.543 [2024-11-19 10:28:07.320179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:53.543 [2024-11-19 10:28:07.320233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 BaseBdev1_malloc 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 [2024-11-19 10:28:07.747189] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:54.113 [2024-11-19 10:28:07.747255] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.113 [2024-11-19 10:28:07.747280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:54.113 [2024-11-19 10:28:07.747298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.113 [2024-11-19 10:28:07.749315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.113 [2024-11-19 10:28:07.749355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:54.113 BaseBdev1 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 BaseBdev2_malloc 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.113 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 [2024-11-19 10:28:07.799928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:54.113 [2024-11-19 10:28:07.800005] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:16:54.113 [2024-11-19 10:28:07.800026] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:54.113 [2024-11-19 10:28:07.800037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.113 [2024-11-19 10:28:07.801949] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.113 [2024-11-19 10:28:07.802061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:54.113 BaseBdev2 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 spare_malloc 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 spare_delay 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.374 
[2024-11-19 10:28:07.897257] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:54.374 [2024-11-19 10:28:07.897368] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.374 [2024-11-19 10:28:07.897404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:54.374 [2024-11-19 10:28:07.897434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.374 [2024-11-19 10:28:07.899448] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.374 [2024-11-19 10:28:07.899523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:54.374 spare 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.374 [2024-11-19 10:28:07.909299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.374 [2024-11-19 10:28:07.911072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:54.374 [2024-11-19 10:28:07.911309] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:54.374 [2024-11-19 10:28:07.911372] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:54.374 [2024-11-19 10:28:07.911631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:54.374 [2024-11-19 10:28:07.911842] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:54.374 [2024-11-19 
10:28:07.911884] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:54.374 [2024-11-19 10:28:07.912085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.374 "name": "raid_bdev1", 00:16:54.374 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:54.374 "strip_size_kb": 0, 00:16:54.374 "state": "online", 00:16:54.374 "raid_level": "raid1", 00:16:54.374 "superblock": true, 00:16:54.374 "num_base_bdevs": 2, 00:16:54.374 "num_base_bdevs_discovered": 2, 00:16:54.374 "num_base_bdevs_operational": 2, 00:16:54.374 "base_bdevs_list": [ 00:16:54.374 { 00:16:54.374 "name": "BaseBdev1", 00:16:54.374 "uuid": "540ac4de-96e3-58df-99f1-34a299a900da", 00:16:54.374 "is_configured": true, 00:16:54.374 "data_offset": 256, 00:16:54.374 "data_size": 7936 00:16:54.374 }, 00:16:54.374 { 00:16:54.374 "name": "BaseBdev2", 00:16:54.374 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:54.374 "is_configured": true, 00:16:54.374 "data_offset": 256, 00:16:54.374 "data_size": 7936 00:16:54.374 } 00:16:54.374 ] 00:16:54.374 }' 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.374 10:28:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.635 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:54.635 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:54.635 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.635 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.635 [2024-11-19 10:28:08.396682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:54.896 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:54.896 
10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:54.896 [2024-11-19 10:28:08.648027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:54.896 /dev/nbd0 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:55.156 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:55.157 1+0 records in 00:16:55.157 1+0 records out 00:16:55.157 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498652 s, 8.2 MB/s 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:55.157 10:28:08 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:55.157 10:28:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:55.724 7936+0 records in 00:16:55.724 7936+0 records out 00:16:55.724 32505856 bytes (33 MB, 31 MiB) copied, 0.611215 s, 53.2 MB/s 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:55.724 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:55.984 
[2024-11-19 10:28:09.552011] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.984 [2024-11-19 10:28:09.567046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.984 10:28:09 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.984 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.984 "name": "raid_bdev1", 00:16:55.984 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:55.984 "strip_size_kb": 0, 00:16:55.984 "state": "online", 00:16:55.984 "raid_level": "raid1", 00:16:55.984 "superblock": true, 00:16:55.984 "num_base_bdevs": 2, 00:16:55.984 "num_base_bdevs_discovered": 1, 00:16:55.985 "num_base_bdevs_operational": 1, 00:16:55.985 "base_bdevs_list": [ 00:16:55.985 { 00:16:55.985 "name": null, 00:16:55.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.985 "is_configured": false, 00:16:55.985 "data_offset": 0, 00:16:55.985 "data_size": 7936 00:16:55.985 }, 00:16:55.985 { 00:16:55.985 "name": "BaseBdev2", 00:16:55.985 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:55.985 "is_configured": true, 00:16:55.985 "data_offset": 256, 00:16:55.985 
"data_size": 7936 00:16:55.985 } 00:16:55.985 ] 00:16:55.985 }' 00:16:55.985 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.985 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.245 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.245 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.245 10:28:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.245 [2024-11-19 10:28:09.998268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.245 [2024-11-19 10:28:10.014811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:16:56.245 10:28:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.245 10:28:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:56.245 [2024-11-19 10:28:10.016739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.628 "name": "raid_bdev1", 00:16:57.628 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:57.628 "strip_size_kb": 0, 00:16:57.628 "state": "online", 00:16:57.628 "raid_level": "raid1", 00:16:57.628 "superblock": true, 00:16:57.628 "num_base_bdevs": 2, 00:16:57.628 "num_base_bdevs_discovered": 2, 00:16:57.628 "num_base_bdevs_operational": 2, 00:16:57.628 "process": { 00:16:57.628 "type": "rebuild", 00:16:57.628 "target": "spare", 00:16:57.628 "progress": { 00:16:57.628 "blocks": 2560, 00:16:57.628 "percent": 32 00:16:57.628 } 00:16:57.628 }, 00:16:57.628 "base_bdevs_list": [ 00:16:57.628 { 00:16:57.628 "name": "spare", 00:16:57.628 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:16:57.628 "is_configured": true, 00:16:57.628 "data_offset": 256, 00:16:57.628 "data_size": 7936 00:16:57.628 }, 00:16:57.628 { 00:16:57.628 "name": "BaseBdev2", 00:16:57.628 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:57.628 "is_configured": true, 00:16:57.628 "data_offset": 256, 00:16:57.628 "data_size": 7936 00:16:57.628 } 00:16:57.628 ] 00:16:57.628 }' 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.628 [2024-11-19 10:28:11.168043] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.628 [2024-11-19 10:28:11.221307] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:57.628 [2024-11-19 10:28:11.221364] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.628 [2024-11-19 10:28:11.221378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:57.628 [2024-11-19 10:28:11.221386] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.628 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.629 "name": "raid_bdev1", 00:16:57.629 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:57.629 "strip_size_kb": 0, 00:16:57.629 "state": "online", 00:16:57.629 "raid_level": "raid1", 00:16:57.629 "superblock": true, 00:16:57.629 "num_base_bdevs": 2, 00:16:57.629 "num_base_bdevs_discovered": 1, 00:16:57.629 "num_base_bdevs_operational": 1, 00:16:57.629 "base_bdevs_list": [ 00:16:57.629 { 00:16:57.629 "name": null, 00:16:57.629 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.629 "is_configured": false, 00:16:57.629 "data_offset": 0, 00:16:57.629 "data_size": 7936 00:16:57.629 }, 00:16:57.629 { 00:16:57.629 "name": "BaseBdev2", 00:16:57.629 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:57.629 "is_configured": true, 00:16:57.629 "data_offset": 256, 00:16:57.629 "data_size": 7936 00:16:57.629 } 00:16:57.629 ] 00:16:57.629 }' 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.629 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.889 10:28:11 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.889 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.150 "name": "raid_bdev1", 00:16:58.150 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:58.150 "strip_size_kb": 0, 00:16:58.150 "state": "online", 00:16:58.150 "raid_level": "raid1", 00:16:58.150 "superblock": true, 00:16:58.150 "num_base_bdevs": 2, 00:16:58.150 "num_base_bdevs_discovered": 1, 00:16:58.150 "num_base_bdevs_operational": 1, 00:16:58.150 "base_bdevs_list": [ 00:16:58.150 { 00:16:58.150 "name": null, 00:16:58.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:58.150 "is_configured": false, 00:16:58.150 "data_offset": 0, 00:16:58.150 "data_size": 7936 00:16:58.150 }, 00:16:58.150 { 00:16:58.150 "name": "BaseBdev2", 00:16:58.150 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:58.150 "is_configured": true, 00:16:58.150 "data_offset": 
256, 00:16:58.150 "data_size": 7936 00:16:58.150 } 00:16:58.150 ] 00:16:58.150 }' 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:58.150 [2024-11-19 10:28:11.796320] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:58.150 [2024-11-19 10:28:11.811320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.150 10:28:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:58.150 [2024-11-19 10:28:11.813186] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.094 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.094 "name": "raid_bdev1", 00:16:59.094 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:59.094 "strip_size_kb": 0, 00:16:59.095 "state": "online", 00:16:59.095 "raid_level": "raid1", 00:16:59.095 "superblock": true, 00:16:59.095 "num_base_bdevs": 2, 00:16:59.095 "num_base_bdevs_discovered": 2, 00:16:59.095 "num_base_bdevs_operational": 2, 00:16:59.095 "process": { 00:16:59.095 "type": "rebuild", 00:16:59.095 "target": "spare", 00:16:59.095 "progress": { 00:16:59.095 "blocks": 2560, 00:16:59.095 "percent": 32 00:16:59.095 } 00:16:59.095 }, 00:16:59.095 "base_bdevs_list": [ 00:16:59.095 { 00:16:59.095 "name": "spare", 00:16:59.095 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:16:59.095 "is_configured": true, 00:16:59.095 "data_offset": 256, 00:16:59.095 "data_size": 7936 00:16:59.095 }, 00:16:59.095 { 00:16:59.095 "name": "BaseBdev2", 00:16:59.095 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:59.095 "is_configured": true, 00:16:59.095 "data_offset": 256, 00:16:59.095 "data_size": 7936 00:16:59.095 } 00:16:59.095 ] 00:16:59.095 }' 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:59.360 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=657 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.360 10:28:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.360 10:28:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.360 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.360 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.360 "name": "raid_bdev1", 00:16:59.360 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:16:59.360 "strip_size_kb": 0, 00:16:59.360 "state": "online", 00:16:59.360 "raid_level": "raid1", 00:16:59.360 "superblock": true, 00:16:59.360 "num_base_bdevs": 2, 00:16:59.360 "num_base_bdevs_discovered": 2, 00:16:59.360 "num_base_bdevs_operational": 2, 00:16:59.360 "process": { 00:16:59.360 "type": "rebuild", 00:16:59.360 "target": "spare", 00:16:59.360 "progress": { 00:16:59.360 "blocks": 2816, 00:16:59.360 "percent": 35 00:16:59.360 } 00:16:59.360 }, 00:16:59.360 "base_bdevs_list": [ 00:16:59.360 { 00:16:59.360 "name": "spare", 00:16:59.360 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:16:59.360 "is_configured": true, 00:16:59.360 "data_offset": 256, 00:16:59.360 "data_size": 7936 00:16:59.360 }, 00:16:59.360 { 00:16:59.360 "name": "BaseBdev2", 00:16:59.360 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:16:59.360 "is_configured": true, 00:16:59.360 "data_offset": 256, 00:16:59.360 "data_size": 7936 00:16:59.360 } 00:16:59.360 ] 00:16:59.360 }' 00:16:59.360 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.360 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.361 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.361 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.361 10:28:13 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.744 "name": "raid_bdev1", 00:17:00.744 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:00.744 "strip_size_kb": 0, 00:17:00.744 "state": "online", 00:17:00.744 "raid_level": "raid1", 00:17:00.744 "superblock": true, 00:17:00.744 "num_base_bdevs": 2, 00:17:00.744 "num_base_bdevs_discovered": 2, 00:17:00.744 "num_base_bdevs_operational": 2, 00:17:00.744 "process": { 00:17:00.744 "type": "rebuild", 00:17:00.744 "target": "spare", 00:17:00.744 "progress": { 00:17:00.744 "blocks": 5632, 00:17:00.744 "percent": 70 00:17:00.744 } 00:17:00.744 }, 00:17:00.744 "base_bdevs_list": [ 00:17:00.744 { 
00:17:00.744 "name": "spare", 00:17:00.744 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:00.744 "is_configured": true, 00:17:00.744 "data_offset": 256, 00:17:00.744 "data_size": 7936 00:17:00.744 }, 00:17:00.744 { 00:17:00.744 "name": "BaseBdev2", 00:17:00.744 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:00.744 "is_configured": true, 00:17:00.744 "data_offset": 256, 00:17:00.744 "data_size": 7936 00:17:00.744 } 00:17:00.744 ] 00:17:00.744 }' 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.744 10:28:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.314 [2024-11-19 10:28:14.924610] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:01.314 [2024-11-19 10:28:14.924675] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:01.314 [2024-11-19 10:28:14.924763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.574 "name": "raid_bdev1", 00:17:01.574 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:01.574 "strip_size_kb": 0, 00:17:01.574 "state": "online", 00:17:01.574 "raid_level": "raid1", 00:17:01.574 "superblock": true, 00:17:01.574 "num_base_bdevs": 2, 00:17:01.574 "num_base_bdevs_discovered": 2, 00:17:01.574 "num_base_bdevs_operational": 2, 00:17:01.574 "base_bdevs_list": [ 00:17:01.574 { 00:17:01.574 "name": "spare", 00:17:01.574 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:01.574 "is_configured": true, 00:17:01.574 "data_offset": 256, 00:17:01.574 "data_size": 7936 00:17:01.574 }, 00:17:01.574 { 00:17:01.574 "name": "BaseBdev2", 00:17:01.574 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:01.574 "is_configured": true, 00:17:01.574 "data_offset": 256, 00:17:01.574 "data_size": 7936 00:17:01.574 } 00:17:01.574 ] 00:17:01.574 }' 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:01.574 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.835 "name": "raid_bdev1", 00:17:01.835 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:01.835 "strip_size_kb": 0, 00:17:01.835 "state": "online", 00:17:01.835 "raid_level": "raid1", 00:17:01.835 "superblock": true, 00:17:01.835 "num_base_bdevs": 2, 00:17:01.835 "num_base_bdevs_discovered": 2, 00:17:01.835 "num_base_bdevs_operational": 2, 00:17:01.835 "base_bdevs_list": [ 00:17:01.835 { 00:17:01.835 "name": "spare", 00:17:01.835 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:01.835 "is_configured": true, 00:17:01.835 
"data_offset": 256, 00:17:01.835 "data_size": 7936 00:17:01.835 }, 00:17:01.835 { 00:17:01.835 "name": "BaseBdev2", 00:17:01.835 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:01.835 "is_configured": true, 00:17:01.835 "data_offset": 256, 00:17:01.835 "data_size": 7936 00:17:01.835 } 00:17:01.835 ] 00:17:01.835 }' 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.835 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.835 "name": "raid_bdev1", 00:17:01.836 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:01.836 "strip_size_kb": 0, 00:17:01.836 "state": "online", 00:17:01.836 "raid_level": "raid1", 00:17:01.836 "superblock": true, 00:17:01.836 "num_base_bdevs": 2, 00:17:01.836 "num_base_bdevs_discovered": 2, 00:17:01.836 "num_base_bdevs_operational": 2, 00:17:01.836 "base_bdevs_list": [ 00:17:01.836 { 00:17:01.836 "name": "spare", 00:17:01.836 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:01.836 "is_configured": true, 00:17:01.836 "data_offset": 256, 00:17:01.836 "data_size": 7936 00:17:01.836 }, 00:17:01.836 { 00:17:01.836 "name": "BaseBdev2", 00:17:01.836 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:01.836 "is_configured": true, 00:17:01.836 "data_offset": 256, 00:17:01.836 "data_size": 7936 00:17:01.836 } 00:17:01.836 ] 00:17:01.836 }' 00:17:01.836 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.836 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.406 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:02.406 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.406 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.406 
[2024-11-19 10:28:15.991704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.406 [2024-11-19 10:28:15.991781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.406 [2024-11-19 10:28:15.991891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.406 [2024-11-19 10:28:15.991986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.406 [2024-11-19 10:28:15.992047] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:02.406 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.406 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.406 10:28:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.406 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:02.666 /dev/nbd0 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.666 1+0 records in 00:17:02.666 1+0 records out 00:17:02.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384693 s, 10.6 MB/s 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.666 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:02.927 /dev/nbd1 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:02.927 1+0 records in 00:17:02.927 1+0 records out 00:17:02.927 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530437 s, 7.7 MB/s 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:02.927 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:03.186 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:03.446 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.446 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:03.446 10:28:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:03.446 10:28:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.446 [2024-11-19 10:28:17.207109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:03.446 [2024-11-19 10:28:17.207168] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.446 [2024-11-19 10:28:17.207194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:03.446 [2024-11-19 10:28:17.207203] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.446 [2024-11-19 10:28:17.209732] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.446 
[2024-11-19 10:28:17.209771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:03.446 [2024-11-19 10:28:17.209868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:03.446 [2024-11-19 10:28:17.209930] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:03.446 [2024-11-19 10:28:17.210122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.446 spare 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.446 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.706 [2024-11-19 10:28:17.310051] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:03.706 [2024-11-19 10:28:17.310081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.706 [2024-11-19 10:28:17.310360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:03.706 [2024-11-19 10:28:17.310544] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:03.706 [2024-11-19 10:28:17.310554] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:03.706 [2024-11-19 10:28:17.310707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.706 10:28:17 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.706 "name": "raid_bdev1", 00:17:03.706 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:03.706 "strip_size_kb": 0, 00:17:03.706 "state": "online", 00:17:03.706 "raid_level": "raid1", 00:17:03.706 "superblock": true, 00:17:03.706 "num_base_bdevs": 2, 00:17:03.706 "num_base_bdevs_discovered": 2, 00:17:03.706 "num_base_bdevs_operational": 2, 
00:17:03.706 "base_bdevs_list": [ 00:17:03.706 { 00:17:03.706 "name": "spare", 00:17:03.706 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:03.706 "is_configured": true, 00:17:03.706 "data_offset": 256, 00:17:03.706 "data_size": 7936 00:17:03.706 }, 00:17:03.706 { 00:17:03.706 "name": "BaseBdev2", 00:17:03.706 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:03.706 "is_configured": true, 00:17:03.706 "data_offset": 256, 00:17:03.706 "data_size": 7936 00:17:03.706 } 00:17:03.706 ] 00:17:03.706 }' 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.706 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.275 "name": "raid_bdev1", 00:17:04.275 
"uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:04.275 "strip_size_kb": 0, 00:17:04.275 "state": "online", 00:17:04.275 "raid_level": "raid1", 00:17:04.275 "superblock": true, 00:17:04.275 "num_base_bdevs": 2, 00:17:04.275 "num_base_bdevs_discovered": 2, 00:17:04.275 "num_base_bdevs_operational": 2, 00:17:04.275 "base_bdevs_list": [ 00:17:04.275 { 00:17:04.275 "name": "spare", 00:17:04.275 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 256, 00:17:04.275 "data_size": 7936 00:17:04.275 }, 00:17:04.275 { 00:17:04.275 "name": "BaseBdev2", 00:17:04.275 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:04.275 "is_configured": true, 00:17:04.275 "data_offset": 256, 00:17:04.275 "data_size": 7936 00:17:04.275 } 00:17:04.275 ] 00:17:04.275 }' 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:04.275 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.276 [2024-11-19 10:28:17.937887] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.276 
10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.276 "name": "raid_bdev1", 00:17:04.276 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:04.276 "strip_size_kb": 0, 00:17:04.276 "state": "online", 00:17:04.276 "raid_level": "raid1", 00:17:04.276 "superblock": true, 00:17:04.276 "num_base_bdevs": 2, 00:17:04.276 "num_base_bdevs_discovered": 1, 00:17:04.276 "num_base_bdevs_operational": 1, 00:17:04.276 "base_bdevs_list": [ 00:17:04.276 { 00:17:04.276 "name": null, 00:17:04.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:04.276 "is_configured": false, 00:17:04.276 "data_offset": 0, 00:17:04.276 "data_size": 7936 00:17:04.276 }, 00:17:04.276 { 00:17:04.276 "name": "BaseBdev2", 00:17:04.276 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:04.276 "is_configured": true, 00:17:04.276 "data_offset": 256, 00:17:04.276 "data_size": 7936 00:17:04.276 } 00:17:04.276 ] 00:17:04.276 }' 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.276 10:28:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.843 10:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:04.843 10:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.843 10:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.843 [2024-11-19 10:28:18.389139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.843 [2024-11-19 10:28:18.389370] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev 
raid_bdev1 (5) 00:17:04.843 [2024-11-19 10:28:18.389444] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:04.843 [2024-11-19 10:28:18.389500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:04.843 [2024-11-19 10:28:18.406586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:04.843 10:28:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.844 10:28:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:04.844 [2024-11-19 10:28:18.408723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.783 
"name": "raid_bdev1", 00:17:05.783 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:05.783 "strip_size_kb": 0, 00:17:05.783 "state": "online", 00:17:05.783 "raid_level": "raid1", 00:17:05.783 "superblock": true, 00:17:05.783 "num_base_bdevs": 2, 00:17:05.783 "num_base_bdevs_discovered": 2, 00:17:05.783 "num_base_bdevs_operational": 2, 00:17:05.783 "process": { 00:17:05.783 "type": "rebuild", 00:17:05.783 "target": "spare", 00:17:05.783 "progress": { 00:17:05.783 "blocks": 2560, 00:17:05.783 "percent": 32 00:17:05.783 } 00:17:05.783 }, 00:17:05.783 "base_bdevs_list": [ 00:17:05.783 { 00:17:05.783 "name": "spare", 00:17:05.783 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:05.783 "is_configured": true, 00:17:05.783 "data_offset": 256, 00:17:05.783 "data_size": 7936 00:17:05.783 }, 00:17:05.783 { 00:17:05.783 "name": "BaseBdev2", 00:17:05.783 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:05.783 "is_configured": true, 00:17:05.783 "data_offset": 256, 00:17:05.783 "data_size": 7936 00:17:05.783 } 00:17:05.783 ] 00:17:05.783 }' 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.783 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:05.783 [2024-11-19 10:28:19.547796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.044 [2024-11-19 
10:28:19.617282] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:06.044 [2024-11-19 10:28:19.617343] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.044 [2024-11-19 10:28:19.617358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:06.044 [2024-11-19 10:28:19.617367] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.044 "name": "raid_bdev1", 00:17:06.044 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:06.044 "strip_size_kb": 0, 00:17:06.044 "state": "online", 00:17:06.044 "raid_level": "raid1", 00:17:06.044 "superblock": true, 00:17:06.044 "num_base_bdevs": 2, 00:17:06.044 "num_base_bdevs_discovered": 1, 00:17:06.044 "num_base_bdevs_operational": 1, 00:17:06.044 "base_bdevs_list": [ 00:17:06.044 { 00:17:06.044 "name": null, 00:17:06.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.044 "is_configured": false, 00:17:06.044 "data_offset": 0, 00:17:06.044 "data_size": 7936 00:17:06.044 }, 00:17:06.044 { 00:17:06.044 "name": "BaseBdev2", 00:17:06.044 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:06.044 "is_configured": true, 00:17:06.044 "data_offset": 256, 00:17:06.044 "data_size": 7936 00:17:06.044 } 00:17:06.044 ] 00:17:06.044 }' 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.044 10:28:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.615 10:28:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:06.615 10:28:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.615 10:28:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.615 [2024-11-19 10:28:20.091970] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:06.615 [2024-11-19 10:28:20.092101] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.615 [2024-11-19 10:28:20.092143] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:06.615 [2024-11-19 10:28:20.092180] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.615 [2024-11-19 10:28:20.092756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.615 [2024-11-19 10:28:20.092824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:06.615 [2024-11-19 10:28:20.092939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:06.615 [2024-11-19 10:28:20.092980] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:06.615 [2024-11-19 10:28:20.093037] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:06.615 [2024-11-19 10:28:20.093101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:06.615 [2024-11-19 10:28:20.109089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:06.615 spare 00:17:06.615 10:28:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.615 10:28:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:06.615 [2024-11-19 10:28:20.111162] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.556 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:07.557 "name": "raid_bdev1", 00:17:07.557 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:07.557 "strip_size_kb": 0, 00:17:07.557 
"state": "online", 00:17:07.557 "raid_level": "raid1", 00:17:07.557 "superblock": true, 00:17:07.557 "num_base_bdevs": 2, 00:17:07.557 "num_base_bdevs_discovered": 2, 00:17:07.557 "num_base_bdevs_operational": 2, 00:17:07.557 "process": { 00:17:07.557 "type": "rebuild", 00:17:07.557 "target": "spare", 00:17:07.557 "progress": { 00:17:07.557 "blocks": 2560, 00:17:07.557 "percent": 32 00:17:07.557 } 00:17:07.557 }, 00:17:07.557 "base_bdevs_list": [ 00:17:07.557 { 00:17:07.557 "name": "spare", 00:17:07.557 "uuid": "ff52fd12-f52c-52e8-a502-f089cbd95d01", 00:17:07.557 "is_configured": true, 00:17:07.557 "data_offset": 256, 00:17:07.557 "data_size": 7936 00:17:07.557 }, 00:17:07.557 { 00:17:07.557 "name": "BaseBdev2", 00:17:07.557 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:07.557 "is_configured": true, 00:17:07.557 "data_offset": 256, 00:17:07.557 "data_size": 7936 00:17:07.557 } 00:17:07.557 ] 00:17:07.557 }' 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.557 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.557 [2024-11-19 10:28:21.274972] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.557 [2024-11-19 10:28:21.319653] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:07.557 [2024-11-19 10:28:21.319760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.557 [2024-11-19 10:28:21.319813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:07.557 [2024-11-19 10:28:21.319835] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.818 10:28:21 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.818 "name": "raid_bdev1", 00:17:07.818 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:07.818 "strip_size_kb": 0, 00:17:07.818 "state": "online", 00:17:07.818 "raid_level": "raid1", 00:17:07.818 "superblock": true, 00:17:07.818 "num_base_bdevs": 2, 00:17:07.818 "num_base_bdevs_discovered": 1, 00:17:07.818 "num_base_bdevs_operational": 1, 00:17:07.818 "base_bdevs_list": [ 00:17:07.818 { 00:17:07.818 "name": null, 00:17:07.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.818 "is_configured": false, 00:17:07.818 "data_offset": 0, 00:17:07.818 "data_size": 7936 00:17:07.818 }, 00:17:07.818 { 00:17:07.818 "name": "BaseBdev2", 00:17:07.818 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:07.818 "is_configured": true, 00:17:07.818 "data_offset": 256, 00:17:07.818 "data_size": 7936 00:17:07.818 } 00:17:07.818 ] 00:17:07.818 }' 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.818 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.078 "name": "raid_bdev1", 00:17:08.078 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:08.078 "strip_size_kb": 0, 00:17:08.078 "state": "online", 00:17:08.078 "raid_level": "raid1", 00:17:08.078 "superblock": true, 00:17:08.078 "num_base_bdevs": 2, 00:17:08.078 "num_base_bdevs_discovered": 1, 00:17:08.078 "num_base_bdevs_operational": 1, 00:17:08.078 "base_bdevs_list": [ 00:17:08.078 { 00:17:08.078 "name": null, 00:17:08.078 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.078 "is_configured": false, 00:17:08.078 "data_offset": 0, 00:17:08.078 "data_size": 7936 00:17:08.078 }, 00:17:08.078 { 00:17:08.078 "name": "BaseBdev2", 00:17:08.078 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:08.078 "is_configured": true, 00:17:08.078 "data_offset": 256, 00:17:08.078 "data_size": 7936 00:17:08.078 } 00:17:08.078 ] 00:17:08.078 }' 00:17:08.078 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.338 [2024-11-19 10:28:21.948154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:08.338 [2024-11-19 10:28:21.948250] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.338 [2024-11-19 10:28:21.948281] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:08.338 [2024-11-19 10:28:21.948301] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.338 [2024-11-19 10:28:21.948817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.338 [2024-11-19 10:28:21.948835] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:08.338 [2024-11-19 10:28:21.948912] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:08.338 [2024-11-19 10:28:21.948927] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:08.338 [2024-11-19 10:28:21.948940] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:08.338 [2024-11-19 10:28:21.948949] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:08.338 BaseBdev1 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.338 10:28:21 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.277 10:28:22 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.277 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.277 "name": "raid_bdev1", 00:17:09.277 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:09.277 "strip_size_kb": 0, 00:17:09.277 "state": "online", 00:17:09.277 "raid_level": "raid1", 00:17:09.277 "superblock": true, 00:17:09.277 "num_base_bdevs": 2, 00:17:09.277 "num_base_bdevs_discovered": 1, 00:17:09.277 "num_base_bdevs_operational": 1, 00:17:09.277 "base_bdevs_list": [ 00:17:09.277 { 00:17:09.277 "name": null, 00:17:09.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.277 "is_configured": false, 00:17:09.277 "data_offset": 0, 00:17:09.277 "data_size": 7936 00:17:09.277 }, 00:17:09.277 { 00:17:09.277 "name": "BaseBdev2", 00:17:09.277 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:09.277 "is_configured": true, 00:17:09.277 "data_offset": 256, 00:17:09.277 "data_size": 7936 00:17:09.277 } 00:17:09.277 ] 00:17:09.277 }' 00:17:09.277 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.278 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.847 "name": "raid_bdev1", 00:17:09.847 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:09.847 "strip_size_kb": 0, 00:17:09.847 "state": "online", 00:17:09.847 "raid_level": "raid1", 00:17:09.847 "superblock": true, 00:17:09.847 "num_base_bdevs": 2, 00:17:09.847 "num_base_bdevs_discovered": 1, 00:17:09.847 "num_base_bdevs_operational": 1, 00:17:09.847 "base_bdevs_list": [ 00:17:09.847 { 00:17:09.847 "name": null, 00:17:09.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.847 "is_configured": false, 00:17:09.847 "data_offset": 0, 00:17:09.847 "data_size": 7936 00:17:09.847 }, 00:17:09.847 { 00:17:09.847 "name": "BaseBdev2", 00:17:09.847 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:09.847 "is_configured": true, 00:17:09.847 "data_offset": 256, 00:17:09.847 "data_size": 7936 00:17:09.847 } 00:17:09.847 ] 00:17:09.847 }' 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.847 [2024-11-19 10:28:23.553444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.847 [2024-11-19 10:28:23.553648] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.847 [2024-11-19 10:28:23.553705] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:09.847 request: 00:17:09.847 { 00:17:09.847 "base_bdev": "BaseBdev1", 00:17:09.847 "raid_bdev": "raid_bdev1", 00:17:09.847 "method": "bdev_raid_add_base_bdev", 00:17:09.847 "req_id": 1 00:17:09.847 } 00:17:09.847 Got JSON-RPC error response 00:17:09.847 response: 00:17:09.847 { 00:17:09.847 "code": -22, 00:17:09.847 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:09.847 } 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.847 10:28:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.227 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.227 "name": "raid_bdev1", 00:17:11.227 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:11.227 "strip_size_kb": 0, 00:17:11.227 "state": "online", 00:17:11.227 "raid_level": "raid1", 00:17:11.227 "superblock": true, 00:17:11.228 "num_base_bdevs": 2, 00:17:11.228 "num_base_bdevs_discovered": 1, 00:17:11.228 "num_base_bdevs_operational": 1, 00:17:11.228 "base_bdevs_list": [ 00:17:11.228 { 00:17:11.228 "name": null, 00:17:11.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.228 "is_configured": false, 00:17:11.228 "data_offset": 0, 00:17:11.228 "data_size": 7936 00:17:11.228 }, 00:17:11.228 { 00:17:11.228 "name": "BaseBdev2", 00:17:11.228 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:11.228 "is_configured": true, 00:17:11.228 "data_offset": 256, 00:17:11.228 "data_size": 7936 00:17:11.228 } 00:17:11.228 ] 00:17:11.228 }' 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:11.228 10:28:24 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.228 10:28:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:11.488 "name": "raid_bdev1", 00:17:11.488 "uuid": "0716d9aa-f885-423a-9f2a-fbfc210e9cc0", 00:17:11.488 "strip_size_kb": 0, 00:17:11.488 "state": "online", 00:17:11.488 "raid_level": "raid1", 00:17:11.488 "superblock": true, 00:17:11.488 "num_base_bdevs": 2, 00:17:11.488 "num_base_bdevs_discovered": 1, 00:17:11.488 "num_base_bdevs_operational": 1, 00:17:11.488 "base_bdevs_list": [ 00:17:11.488 { 00:17:11.488 "name": null, 00:17:11.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.488 "is_configured": false, 00:17:11.488 "data_offset": 0, 00:17:11.488 "data_size": 7936 00:17:11.488 }, 00:17:11.488 { 00:17:11.488 "name": "BaseBdev2", 00:17:11.488 "uuid": "e75aa62a-af37-5536-b14a-5a5d9a214090", 00:17:11.488 "is_configured": true, 00:17:11.488 "data_offset": 256, 00:17:11.488 "data_size": 7936 00:17:11.488 } 00:17:11.488 ] 00:17:11.488 }' 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:11.488 10:28:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86173 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86173 ']' 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86173 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86173 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86173' 00:17:11.488 killing process with pid 86173 00:17:11.488 Received shutdown signal, test time was about 60.000000 seconds 00:17:11.488 00:17:11.488 Latency(us) 00:17:11.488 [2024-11-19T10:28:25.269Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.488 [2024-11-19T10:28:25.269Z] =================================================================================================================== 00:17:11.488 [2024-11-19T10:28:25.269Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86173 00:17:11.488 [2024-11-19 10:28:25.171779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.488 [2024-11-19 10:28:25.171915] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.488 10:28:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86173 00:17:11.488 [2024-11-19 
10:28:25.171967] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.488 [2024-11-19 10:28:25.171981] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:11.748 [2024-11-19 10:28:25.479492] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:13.130 10:28:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:13.130 00:17:13.130 real 0m19.872s 00:17:13.130 user 0m25.842s 00:17:13.130 sys 0m2.730s 00:17:13.130 ************************************ 00:17:13.130 END TEST raid_rebuild_test_sb_4k 00:17:13.130 ************************************ 00:17:13.130 10:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:13.130 10:28:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:13.130 10:28:26 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:13.130 10:28:26 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:13.130 10:28:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:13.130 10:28:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:13.130 10:28:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:13.130 ************************************ 00:17:13.130 START TEST raid_state_function_test_sb_md_separate 00:17:13.130 ************************************ 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:13.130 
10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:13.130 10:28:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86861 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86861' 00:17:13.130 Process raid pid: 86861 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86861 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86861 ']' 00:17:13.130 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.131 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.131 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:13.131 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.131 10:28:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.131 [2024-11-19 10:28:26.800979] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:13.131 [2024-11-19 10:28:26.801171] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.390 [2024-11-19 10:28:26.978273] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.390 [2024-11-19 10:28:27.084190] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.650 [2024-11-19 10:28:27.270494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.650 [2024-11-19 10:28:27.270581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.909 [2024-11-19 10:28:27.614432] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:13.909 [2024-11-19 10:28:27.614532] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:13.909 [2024-11-19 10:28:27.614562] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:13.909 [2024-11-19 10:28:27.614585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.909 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.910 "name": "Existed_Raid", 00:17:13.910 "uuid": "0252481d-9dfa-4bdd-b22d-caebb7056e9b", 00:17:13.910 "strip_size_kb": 0, 00:17:13.910 "state": "configuring", 00:17:13.910 "raid_level": "raid1", 00:17:13.910 "superblock": true, 00:17:13.910 "num_base_bdevs": 2, 00:17:13.910 "num_base_bdevs_discovered": 0, 00:17:13.910 "num_base_bdevs_operational": 2, 00:17:13.910 "base_bdevs_list": [ 00:17:13.910 { 00:17:13.910 "name": "BaseBdev1", 00:17:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.910 "is_configured": false, 00:17:13.910 "data_offset": 0, 00:17:13.910 "data_size": 0 00:17:13.910 }, 00:17:13.910 { 00:17:13.910 "name": "BaseBdev2", 00:17:13.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.910 "is_configured": false, 00:17:13.910 "data_offset": 0, 00:17:13.910 "data_size": 0 00:17:13.910 } 00:17:13.910 ] 00:17:13.910 }' 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.910 10:28:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 
[2024-11-19 10:28:28.045622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.480 [2024-11-19 10:28:28.045651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 [2024-11-19 10:28:28.057598] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:14.480 [2024-11-19 10:28:28.057674] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:14.480 [2024-11-19 10:28:28.057702] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.480 [2024-11-19 10:28:28.057727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 [2024-11-19 10:28:28.099556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.480 
BaseBdev1 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 [ 00:17:14.480 { 00:17:14.480 "name": "BaseBdev1", 00:17:14.480 "aliases": [ 00:17:14.480 "70c6cebe-56bc-4723-9ca9-0f94fe4dadd0" 00:17:14.480 ], 00:17:14.480 "product_name": "Malloc disk", 
00:17:14.480 "block_size": 4096, 00:17:14.480 "num_blocks": 8192, 00:17:14.480 "uuid": "70c6cebe-56bc-4723-9ca9-0f94fe4dadd0", 00:17:14.480 "md_size": 32, 00:17:14.480 "md_interleave": false, 00:17:14.480 "dif_type": 0, 00:17:14.480 "assigned_rate_limits": { 00:17:14.480 "rw_ios_per_sec": 0, 00:17:14.480 "rw_mbytes_per_sec": 0, 00:17:14.480 "r_mbytes_per_sec": 0, 00:17:14.480 "w_mbytes_per_sec": 0 00:17:14.480 }, 00:17:14.480 "claimed": true, 00:17:14.480 "claim_type": "exclusive_write", 00:17:14.480 "zoned": false, 00:17:14.480 "supported_io_types": { 00:17:14.480 "read": true, 00:17:14.480 "write": true, 00:17:14.480 "unmap": true, 00:17:14.480 "flush": true, 00:17:14.480 "reset": true, 00:17:14.480 "nvme_admin": false, 00:17:14.480 "nvme_io": false, 00:17:14.480 "nvme_io_md": false, 00:17:14.480 "write_zeroes": true, 00:17:14.480 "zcopy": true, 00:17:14.480 "get_zone_info": false, 00:17:14.480 "zone_management": false, 00:17:14.480 "zone_append": false, 00:17:14.480 "compare": false, 00:17:14.480 "compare_and_write": false, 00:17:14.480 "abort": true, 00:17:14.480 "seek_hole": false, 00:17:14.480 "seek_data": false, 00:17:14.480 "copy": true, 00:17:14.480 "nvme_iov_md": false 00:17:14.480 }, 00:17:14.480 "memory_domains": [ 00:17:14.480 { 00:17:14.480 "dma_device_id": "system", 00:17:14.480 "dma_device_type": 1 00:17:14.480 }, 00:17:14.480 { 00:17:14.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.480 "dma_device_type": 2 00:17:14.480 } 00:17:14.480 ], 00:17:14.480 "driver_specific": {} 00:17:14.480 } 00:17:14.480 ] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:14.480 10:28:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.480 "name": "Existed_Raid", 00:17:14.480 "uuid": "20187a1e-158e-4e30-860f-fbd342189d9b", 
00:17:14.480 "strip_size_kb": 0, 00:17:14.480 "state": "configuring", 00:17:14.480 "raid_level": "raid1", 00:17:14.480 "superblock": true, 00:17:14.480 "num_base_bdevs": 2, 00:17:14.480 "num_base_bdevs_discovered": 1, 00:17:14.480 "num_base_bdevs_operational": 2, 00:17:14.480 "base_bdevs_list": [ 00:17:14.480 { 00:17:14.480 "name": "BaseBdev1", 00:17:14.480 "uuid": "70c6cebe-56bc-4723-9ca9-0f94fe4dadd0", 00:17:14.480 "is_configured": true, 00:17:14.480 "data_offset": 256, 00:17:14.480 "data_size": 7936 00:17:14.480 }, 00:17:14.480 { 00:17:14.480 "name": "BaseBdev2", 00:17:14.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.480 "is_configured": false, 00:17:14.480 "data_offset": 0, 00:17:14.480 "data_size": 0 00:17:14.480 } 00:17:14.480 ] 00:17:14.480 }' 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.480 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.051 [2024-11-19 10:28:28.566886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:15.051 [2024-11-19 10:28:28.566973] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:15.051 10:28:28 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.051 [2024-11-19 10:28:28.578902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:15.051 [2024-11-19 10:28:28.580685] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:15.051 [2024-11-19 10:28:28.580765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.051 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.051 "name": "Existed_Raid", 00:17:15.051 "uuid": "faa38651-605f-45a2-a9ec-03ef13c13a04", 00:17:15.051 "strip_size_kb": 0, 00:17:15.051 "state": "configuring", 00:17:15.051 "raid_level": "raid1", 00:17:15.051 "superblock": true, 00:17:15.051 "num_base_bdevs": 2, 00:17:15.051 "num_base_bdevs_discovered": 1, 00:17:15.051 "num_base_bdevs_operational": 2, 00:17:15.051 "base_bdevs_list": [ 00:17:15.051 { 00:17:15.051 "name": "BaseBdev1", 00:17:15.051 "uuid": "70c6cebe-56bc-4723-9ca9-0f94fe4dadd0", 00:17:15.051 "is_configured": true, 00:17:15.051 "data_offset": 256, 00:17:15.051 "data_size": 7936 00:17:15.051 }, 00:17:15.051 { 00:17:15.051 "name": "BaseBdev2", 00:17:15.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.051 "is_configured": false, 00:17:15.051 "data_offset": 0, 00:17:15.051 "data_size": 0 00:17:15.052 } 00:17:15.052 ] 00:17:15.052 }' 00:17:15.052 10:28:28 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.052 10:28:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.312 [2024-11-19 10:28:29.068406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.312 [2024-11-19 10:28:29.068708] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:15.312 [2024-11-19 10:28:29.068759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:15.312 [2024-11-19 10:28:29.068861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:15.312 [2024-11-19 10:28:29.069024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:15.312 BaseBdev2 00:17:15.312 [2024-11-19 10:28:29.069066] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:15.312 [2024-11-19 10:28:29.069166] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.312 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.591 [ 00:17:15.591 { 00:17:15.591 "name": "BaseBdev2", 00:17:15.591 "aliases": [ 00:17:15.591 "09be6625-4d4e-49d9-97ba-1acecea5de3b" 00:17:15.591 ], 00:17:15.591 "product_name": "Malloc disk", 00:17:15.591 "block_size": 4096, 00:17:15.591 "num_blocks": 8192, 00:17:15.591 "uuid": "09be6625-4d4e-49d9-97ba-1acecea5de3b", 00:17:15.591 "md_size": 32, 00:17:15.591 "md_interleave": false, 00:17:15.591 "dif_type": 0, 00:17:15.591 "assigned_rate_limits": { 00:17:15.591 "rw_ios_per_sec": 0, 00:17:15.591 "rw_mbytes_per_sec": 0, 00:17:15.591 "r_mbytes_per_sec": 0, 00:17:15.591 "w_mbytes_per_sec": 0 00:17:15.591 }, 00:17:15.591 "claimed": true, 00:17:15.591 "claim_type": 
"exclusive_write", 00:17:15.591 "zoned": false, 00:17:15.591 "supported_io_types": { 00:17:15.591 "read": true, 00:17:15.591 "write": true, 00:17:15.591 "unmap": true, 00:17:15.591 "flush": true, 00:17:15.591 "reset": true, 00:17:15.591 "nvme_admin": false, 00:17:15.591 "nvme_io": false, 00:17:15.591 "nvme_io_md": false, 00:17:15.591 "write_zeroes": true, 00:17:15.591 "zcopy": true, 00:17:15.591 "get_zone_info": false, 00:17:15.591 "zone_management": false, 00:17:15.591 "zone_append": false, 00:17:15.591 "compare": false, 00:17:15.591 "compare_and_write": false, 00:17:15.591 "abort": true, 00:17:15.591 "seek_hole": false, 00:17:15.591 "seek_data": false, 00:17:15.591 "copy": true, 00:17:15.591 "nvme_iov_md": false 00:17:15.591 }, 00:17:15.591 "memory_domains": [ 00:17:15.591 { 00:17:15.591 "dma_device_id": "system", 00:17:15.591 "dma_device_type": 1 00:17:15.591 }, 00:17:15.591 { 00:17:15.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.591 "dma_device_type": 2 00:17:15.591 } 00:17:15.591 ], 00:17:15.591 "driver_specific": {} 00:17:15.591 } 00:17:15.591 ] 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.591 
10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.591 "name": "Existed_Raid", 00:17:15.591 "uuid": "faa38651-605f-45a2-a9ec-03ef13c13a04", 00:17:15.591 "strip_size_kb": 0, 00:17:15.591 "state": "online", 00:17:15.591 "raid_level": "raid1", 00:17:15.591 "superblock": true, 00:17:15.591 "num_base_bdevs": 2, 00:17:15.591 "num_base_bdevs_discovered": 2, 00:17:15.591 "num_base_bdevs_operational": 2, 00:17:15.591 
"base_bdevs_list": [ 00:17:15.591 { 00:17:15.591 "name": "BaseBdev1", 00:17:15.591 "uuid": "70c6cebe-56bc-4723-9ca9-0f94fe4dadd0", 00:17:15.591 "is_configured": true, 00:17:15.591 "data_offset": 256, 00:17:15.591 "data_size": 7936 00:17:15.591 }, 00:17:15.591 { 00:17:15.591 "name": "BaseBdev2", 00:17:15.591 "uuid": "09be6625-4d4e-49d9-97ba-1acecea5de3b", 00:17:15.591 "is_configured": true, 00:17:15.591 "data_offset": 256, 00:17:15.591 "data_size": 7936 00:17:15.591 } 00:17:15.591 ] 00:17:15.591 }' 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.591 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.850 [2024-11-19 10:28:29.555868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.850 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.850 "name": "Existed_Raid", 00:17:15.850 "aliases": [ 00:17:15.850 "faa38651-605f-45a2-a9ec-03ef13c13a04" 00:17:15.850 ], 00:17:15.850 "product_name": "Raid Volume", 00:17:15.850 "block_size": 4096, 00:17:15.850 "num_blocks": 7936, 00:17:15.850 "uuid": "faa38651-605f-45a2-a9ec-03ef13c13a04", 00:17:15.850 "md_size": 32, 00:17:15.850 "md_interleave": false, 00:17:15.850 "dif_type": 0, 00:17:15.850 "assigned_rate_limits": { 00:17:15.850 "rw_ios_per_sec": 0, 00:17:15.850 "rw_mbytes_per_sec": 0, 00:17:15.850 "r_mbytes_per_sec": 0, 00:17:15.850 "w_mbytes_per_sec": 0 00:17:15.850 }, 00:17:15.850 "claimed": false, 00:17:15.850 "zoned": false, 00:17:15.850 "supported_io_types": { 00:17:15.850 "read": true, 00:17:15.850 "write": true, 00:17:15.850 "unmap": false, 00:17:15.850 "flush": false, 00:17:15.850 "reset": true, 00:17:15.850 "nvme_admin": false, 00:17:15.850 "nvme_io": false, 00:17:15.850 "nvme_io_md": false, 00:17:15.850 "write_zeroes": true, 00:17:15.850 "zcopy": false, 00:17:15.850 "get_zone_info": false, 00:17:15.850 "zone_management": false, 00:17:15.850 "zone_append": false, 00:17:15.850 "compare": false, 00:17:15.850 "compare_and_write": false, 00:17:15.850 "abort": false, 00:17:15.850 "seek_hole": false, 00:17:15.850 "seek_data": false, 00:17:15.850 "copy": false, 00:17:15.850 "nvme_iov_md": false 00:17:15.851 }, 00:17:15.851 "memory_domains": [ 00:17:15.851 { 00:17:15.851 "dma_device_id": "system", 00:17:15.851 "dma_device_type": 1 00:17:15.851 }, 00:17:15.851 { 00:17:15.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.851 "dma_device_type": 2 00:17:15.851 }, 
00:17:15.851 { 00:17:15.851 "dma_device_id": "system", 00:17:15.851 "dma_device_type": 1 00:17:15.851 }, 00:17:15.851 { 00:17:15.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.851 "dma_device_type": 2 00:17:15.851 } 00:17:15.851 ], 00:17:15.851 "driver_specific": { 00:17:15.851 "raid": { 00:17:15.851 "uuid": "faa38651-605f-45a2-a9ec-03ef13c13a04", 00:17:15.851 "strip_size_kb": 0, 00:17:15.851 "state": "online", 00:17:15.851 "raid_level": "raid1", 00:17:15.851 "superblock": true, 00:17:15.851 "num_base_bdevs": 2, 00:17:15.851 "num_base_bdevs_discovered": 2, 00:17:15.851 "num_base_bdevs_operational": 2, 00:17:15.851 "base_bdevs_list": [ 00:17:15.851 { 00:17:15.851 "name": "BaseBdev1", 00:17:15.851 "uuid": "70c6cebe-56bc-4723-9ca9-0f94fe4dadd0", 00:17:15.851 "is_configured": true, 00:17:15.851 "data_offset": 256, 00:17:15.851 "data_size": 7936 00:17:15.851 }, 00:17:15.851 { 00:17:15.851 "name": "BaseBdev2", 00:17:15.851 "uuid": "09be6625-4d4e-49d9-97ba-1acecea5de3b", 00:17:15.851 "is_configured": true, 00:17:15.851 "data_offset": 256, 00:17:15.851 "data_size": 7936 00:17:15.851 } 00:17:15.851 ] 00:17:15.851 } 00:17:15.851 } 00:17:15.851 }' 00:17:15.851 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:16.111 BaseBdev2' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.111 10:28:29 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # 
[[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.111 [2024-11-19 10:28:29.775283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.111 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.371 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.371 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.371 "name": "Existed_Raid", 00:17:16.371 "uuid": "faa38651-605f-45a2-a9ec-03ef13c13a04", 00:17:16.371 "strip_size_kb": 0, 00:17:16.371 "state": "online", 00:17:16.371 "raid_level": "raid1", 00:17:16.371 "superblock": true, 00:17:16.371 "num_base_bdevs": 2, 00:17:16.371 "num_base_bdevs_discovered": 1, 00:17:16.371 "num_base_bdevs_operational": 1, 00:17:16.371 "base_bdevs_list": [ 00:17:16.371 { 00:17:16.371 "name": null, 00:17:16.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.371 "is_configured": false, 00:17:16.371 "data_offset": 0, 00:17:16.371 "data_size": 7936 00:17:16.371 }, 00:17:16.371 { 00:17:16.371 "name": "BaseBdev2", 00:17:16.371 
"uuid": "09be6625-4d4e-49d9-97ba-1acecea5de3b", 00:17:16.371 "is_configured": true, 00:17:16.371 "data_offset": 256, 00:17:16.371 "data_size": 7936 00:17:16.371 } 00:17:16.371 ] 00:17:16.371 }' 00:17:16.371 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.371 10:28:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.632 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.632 [2024-11-19 10:28:30.368659] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:16.632 [2024-11-19 10:28:30.368830] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:16.892 [2024-11-19 10:28:30.463912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.892 [2024-11-19 10:28:30.463963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.892 [2024-11-19 10:28:30.463975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:16.892 10:28:30 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86861 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86861 ']' 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86861 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86861 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86861' 00:17:16.892 killing process with pid 86861 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86861 00:17:16.892 [2024-11-19 10:28:30.548963] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.892 10:28:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86861 00:17:16.892 [2024-11-19 10:28:30.564147] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:17.832 ************************************ 00:17:17.832 END TEST raid_state_function_test_sb_md_separate 00:17:17.832 ************************************ 00:17:17.832 10:28:31 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:17.832 00:17:17.832 real 0m4.909s 00:17:17.832 user 0m7.060s 
00:17:17.832 sys 0m0.883s 00:17:17.832 10:28:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:17.832 10:28:31 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.093 10:28:31 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:18.093 10:28:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:18.093 10:28:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.093 10:28:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.093 ************************************ 00:17:18.093 START TEST raid_superblock_test_md_separate 00:17:18.093 ************************************ 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:17:18.093 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87109 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87109 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87109 ']' 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.094 10:28:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.094 [2024-11-19 10:28:31.790574] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:18.094 [2024-11-19 10:28:31.790782] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87109 ] 00:17:18.354 [2024-11-19 10:28:31.970732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.354 [2024-11-19 10:28:32.075216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.615 [2024-11-19 10:28:32.285186] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.615 [2024-11-19 10:28:32.285314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:18.876 10:28:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.876 malloc1 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.876 [2024-11-19 10:28:32.635359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:18.876 [2024-11-19 10:28:32.635491] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.876 [2024-11-19 10:28:32.635527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:18.876 [2024-11-19 10:28:32.635555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.876 [2024-11-19 10:28:32.637367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.876 [2024-11-19 10:28:32.637435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:18.876 pt1 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.876 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 malloc2 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.137 10:28:32 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 [2024-11-19 10:28:32.695592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.137 [2024-11-19 10:28:32.695696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.137 [2024-11-19 10:28:32.695732] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:19.137 [2024-11-19 10:28:32.695758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.137 [2024-11-19 10:28:32.697507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.137 [2024-11-19 10:28:32.697571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.137 pt2 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 [2024-11-19 10:28:32.707599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.137 [2024-11-19 10:28:32.709306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.137 [2024-11-19 10:28:32.709515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:19.137 [2024-11-19 10:28:32.709569] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:19.137 [2024-11-19 10:28:32.709660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:19.137 [2024-11-19 10:28:32.709811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:19.137 [2024-11-19 10:28:32.709851] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:19.137 [2024-11-19 10:28:32.709982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.137 10:28:32 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.137 "name": "raid_bdev1", 00:17:19.137 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:19.137 "strip_size_kb": 0, 00:17:19.137 "state": "online", 00:17:19.137 "raid_level": "raid1", 00:17:19.137 "superblock": true, 00:17:19.137 "num_base_bdevs": 2, 00:17:19.137 "num_base_bdevs_discovered": 2, 00:17:19.137 "num_base_bdevs_operational": 2, 00:17:19.137 "base_bdevs_list": [ 00:17:19.137 { 00:17:19.137 "name": "pt1", 00:17:19.137 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.137 "is_configured": true, 00:17:19.137 "data_offset": 256, 00:17:19.137 "data_size": 7936 00:17:19.137 }, 00:17:19.137 { 00:17:19.137 "name": "pt2", 00:17:19.137 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.137 "is_configured": true, 00:17:19.137 "data_offset": 256, 00:17:19.137 "data_size": 7936 00:17:19.137 } 00:17:19.137 ] 00:17:19.137 }' 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.137 10:28:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.729 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:19.730 [2024-11-19 10:28:33.202976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:19.730 "name": "raid_bdev1", 00:17:19.730 "aliases": [ 00:17:19.730 "2faff4cd-e199-47f6-8181-5a969ea13900" 00:17:19.730 ], 00:17:19.730 "product_name": "Raid Volume", 00:17:19.730 "block_size": 4096, 00:17:19.730 "num_blocks": 7936, 00:17:19.730 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:19.730 "md_size": 32, 00:17:19.730 "md_interleave": false, 00:17:19.730 "dif_type": 0, 00:17:19.730 "assigned_rate_limits": { 00:17:19.730 "rw_ios_per_sec": 0, 00:17:19.730 "rw_mbytes_per_sec": 0, 00:17:19.730 "r_mbytes_per_sec": 0, 00:17:19.730 "w_mbytes_per_sec": 0 00:17:19.730 }, 00:17:19.730 "claimed": false, 00:17:19.730 "zoned": false, 
00:17:19.730 "supported_io_types": { 00:17:19.730 "read": true, 00:17:19.730 "write": true, 00:17:19.730 "unmap": false, 00:17:19.730 "flush": false, 00:17:19.730 "reset": true, 00:17:19.730 "nvme_admin": false, 00:17:19.730 "nvme_io": false, 00:17:19.730 "nvme_io_md": false, 00:17:19.730 "write_zeroes": true, 00:17:19.730 "zcopy": false, 00:17:19.730 "get_zone_info": false, 00:17:19.730 "zone_management": false, 00:17:19.730 "zone_append": false, 00:17:19.730 "compare": false, 00:17:19.730 "compare_and_write": false, 00:17:19.730 "abort": false, 00:17:19.730 "seek_hole": false, 00:17:19.730 "seek_data": false, 00:17:19.730 "copy": false, 00:17:19.730 "nvme_iov_md": false 00:17:19.730 }, 00:17:19.730 "memory_domains": [ 00:17:19.730 { 00:17:19.730 "dma_device_id": "system", 00:17:19.730 "dma_device_type": 1 00:17:19.730 }, 00:17:19.730 { 00:17:19.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.730 "dma_device_type": 2 00:17:19.730 }, 00:17:19.730 { 00:17:19.730 "dma_device_id": "system", 00:17:19.730 "dma_device_type": 1 00:17:19.730 }, 00:17:19.730 { 00:17:19.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.730 "dma_device_type": 2 00:17:19.730 } 00:17:19.730 ], 00:17:19.730 "driver_specific": { 00:17:19.730 "raid": { 00:17:19.730 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:19.730 "strip_size_kb": 0, 00:17:19.730 "state": "online", 00:17:19.730 "raid_level": "raid1", 00:17:19.730 "superblock": true, 00:17:19.730 "num_base_bdevs": 2, 00:17:19.730 "num_base_bdevs_discovered": 2, 00:17:19.730 "num_base_bdevs_operational": 2, 00:17:19.730 "base_bdevs_list": [ 00:17:19.730 { 00:17:19.730 "name": "pt1", 00:17:19.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.730 "is_configured": true, 00:17:19.730 "data_offset": 256, 00:17:19.730 "data_size": 7936 00:17:19.730 }, 00:17:19.730 { 00:17:19.730 "name": "pt2", 00:17:19.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.730 "is_configured": true, 00:17:19.730 "data_offset": 256, 
00:17:19.730 "data_size": 7936 00:17:19.730 } 00:17:19.730 ] 00:17:19.730 } 00:17:19.730 } 00:17:19.730 }' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:19.730 pt2' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:19.730 [2024-11-19 10:28:33.414572] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2faff4cd-e199-47f6-8181-5a969ea13900 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 2faff4cd-e199-47f6-8181-5a969ea13900 ']' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 [2024-11-19 10:28:33.458269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.730 [2024-11-19 10:28:33.458332] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.730 [2024-11-19 10:28:33.458416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.730 [2024-11-19 10:28:33.458477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:19.730 [2024-11-19 10:28:33.458510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:19.991 10:28:33 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:19.991 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 [2024-11-19 10:28:33.602059] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:19.992 [2024-11-19 10:28:33.603686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:19.992 [2024-11-19 10:28:33.603756] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:19.992 [2024-11-19 10:28:33.603801] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:19.992 [2024-11-19 10:28:33.603815] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.992 [2024-11-19 10:28:33.603824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:19.992 request: 00:17:19.992 { 00:17:19.992 "name": 
"raid_bdev1", 00:17:19.992 "raid_level": "raid1", 00:17:19.992 "base_bdevs": [ 00:17:19.992 "malloc1", 00:17:19.992 "malloc2" 00:17:19.992 ], 00:17:19.992 "superblock": false, 00:17:19.992 "method": "bdev_raid_create", 00:17:19.992 "req_id": 1 00:17:19.992 } 00:17:19.992 Got JSON-RPC error response 00:17:19.992 response: 00:17:19.992 { 00:17:19.992 "code": -17, 00:17:19.992 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:19.992 } 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 [2024-11-19 10:28:33.649956] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.992 [2024-11-19 10:28:33.650054] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.992 [2024-11-19 10:28:33.650071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:19.992 [2024-11-19 10:28:33.650081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.992 [2024-11-19 10:28:33.651825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.992 [2024-11-19 10:28:33.651865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.992 [2024-11-19 10:28:33.651901] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:19.992 [2024-11-19 10:28:33.651954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.992 pt1 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.992 "name": "raid_bdev1", 00:17:19.992 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:19.992 "strip_size_kb": 0, 00:17:19.992 "state": "configuring", 00:17:19.992 "raid_level": "raid1", 00:17:19.992 "superblock": true, 00:17:19.992 "num_base_bdevs": 2, 00:17:19.992 "num_base_bdevs_discovered": 1, 00:17:19.992 "num_base_bdevs_operational": 2, 00:17:19.992 "base_bdevs_list": [ 00:17:19.992 { 00:17:19.992 "name": "pt1", 00:17:19.992 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.992 "is_configured": true, 00:17:19.992 "data_offset": 256, 00:17:19.992 "data_size": 7936 00:17:19.992 }, 00:17:19.992 { 00:17:19.992 "name": null, 00:17:19.992 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.992 "is_configured": false, 00:17:19.992 "data_offset": 256, 00:17:19.992 "data_size": 7936 00:17:19.992 } 00:17:19.992 ] 00:17:19.992 }' 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.992 10:28:33 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.561 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:20.561 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:20.561 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:20.561 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.561 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.561 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.561 [2024-11-19 10:28:34.089178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.561 [2024-11-19 10:28:34.089237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.561 [2024-11-19 10:28:34.089253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:20.561 [2024-11-19 10:28:34.089263] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.561 [2024-11-19 10:28:34.089415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.561 [2024-11-19 10:28:34.089429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.561 [2024-11-19 10:28:34.089461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:20.561 [2024-11-19 10:28:34.089480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:20.561 [2024-11-19 10:28:34.089579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:20.561 [2024-11-19 10:28:34.089589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.561 [2024-11-19 10:28:34.089644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:20.562 [2024-11-19 10:28:34.089739] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.562 [2024-11-19 10:28:34.089746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:20.562 [2024-11-19 10:28:34.089831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.562 pt2 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.562 "name": "raid_bdev1", 00:17:20.562 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:20.562 "strip_size_kb": 0, 00:17:20.562 "state": "online", 00:17:20.562 "raid_level": "raid1", 00:17:20.562 "superblock": true, 00:17:20.562 "num_base_bdevs": 2, 00:17:20.562 "num_base_bdevs_discovered": 2, 00:17:20.562 "num_base_bdevs_operational": 2, 00:17:20.562 "base_bdevs_list": [ 00:17:20.562 { 00:17:20.562 "name": "pt1", 00:17:20.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.562 "is_configured": true, 00:17:20.562 "data_offset": 256, 00:17:20.562 "data_size": 7936 00:17:20.562 }, 00:17:20.562 { 00:17:20.562 "name": "pt2", 00:17:20.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.562 "is_configured": true, 00:17:20.562 "data_offset": 256, 
00:17:20.562 "data_size": 7936 00:17:20.562 } 00:17:20.562 ] 00:17:20.562 }' 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.562 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.821 [2024-11-19 10:28:34.516655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.821 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:20.821 "name": "raid_bdev1", 00:17:20.821 "aliases": [ 00:17:20.821 "2faff4cd-e199-47f6-8181-5a969ea13900" 00:17:20.821 ], 00:17:20.821 "product_name": 
"Raid Volume", 00:17:20.821 "block_size": 4096, 00:17:20.821 "num_blocks": 7936, 00:17:20.821 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:20.821 "md_size": 32, 00:17:20.821 "md_interleave": false, 00:17:20.821 "dif_type": 0, 00:17:20.821 "assigned_rate_limits": { 00:17:20.821 "rw_ios_per_sec": 0, 00:17:20.821 "rw_mbytes_per_sec": 0, 00:17:20.821 "r_mbytes_per_sec": 0, 00:17:20.821 "w_mbytes_per_sec": 0 00:17:20.821 }, 00:17:20.821 "claimed": false, 00:17:20.821 "zoned": false, 00:17:20.821 "supported_io_types": { 00:17:20.821 "read": true, 00:17:20.821 "write": true, 00:17:20.821 "unmap": false, 00:17:20.821 "flush": false, 00:17:20.821 "reset": true, 00:17:20.821 "nvme_admin": false, 00:17:20.821 "nvme_io": false, 00:17:20.821 "nvme_io_md": false, 00:17:20.821 "write_zeroes": true, 00:17:20.821 "zcopy": false, 00:17:20.821 "get_zone_info": false, 00:17:20.821 "zone_management": false, 00:17:20.821 "zone_append": false, 00:17:20.821 "compare": false, 00:17:20.821 "compare_and_write": false, 00:17:20.821 "abort": false, 00:17:20.821 "seek_hole": false, 00:17:20.821 "seek_data": false, 00:17:20.821 "copy": false, 00:17:20.821 "nvme_iov_md": false 00:17:20.821 }, 00:17:20.821 "memory_domains": [ 00:17:20.821 { 00:17:20.821 "dma_device_id": "system", 00:17:20.821 "dma_device_type": 1 00:17:20.821 }, 00:17:20.821 { 00:17:20.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.821 "dma_device_type": 2 00:17:20.821 }, 00:17:20.821 { 00:17:20.821 "dma_device_id": "system", 00:17:20.821 "dma_device_type": 1 00:17:20.821 }, 00:17:20.821 { 00:17:20.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.821 "dma_device_type": 2 00:17:20.821 } 00:17:20.821 ], 00:17:20.821 "driver_specific": { 00:17:20.821 "raid": { 00:17:20.821 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:20.821 "strip_size_kb": 0, 00:17:20.821 "state": "online", 00:17:20.821 "raid_level": "raid1", 00:17:20.821 "superblock": true, 00:17:20.821 "num_base_bdevs": 2, 00:17:20.822 
"num_base_bdevs_discovered": 2, 00:17:20.822 "num_base_bdevs_operational": 2, 00:17:20.822 "base_bdevs_list": [ 00:17:20.822 { 00:17:20.822 "name": "pt1", 00:17:20.822 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.822 "is_configured": true, 00:17:20.822 "data_offset": 256, 00:17:20.822 "data_size": 7936 00:17:20.822 }, 00:17:20.822 { 00:17:20.822 "name": "pt2", 00:17:20.822 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.822 "is_configured": true, 00:17:20.822 "data_offset": 256, 00:17:20.822 "data_size": 7936 00:17:20.822 } 00:17:20.822 ] 00:17:20.822 } 00:17:20.822 } 00:17:20.822 }' 00:17:20.822 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.081 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:21.081 pt2' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.082 
10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.082 [2024-11-19 10:28:34.772314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 2faff4cd-e199-47f6-8181-5a969ea13900 '!=' 2faff4cd-e199-47f6-8181-5a969ea13900 ']' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.082 [2024-11-19 10:28:34.816020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.082 10:28:34 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.082 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.342 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.342 "name": "raid_bdev1", 00:17:21.342 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:21.342 "strip_size_kb": 0, 00:17:21.342 "state": "online", 00:17:21.342 "raid_level": "raid1", 00:17:21.342 "superblock": true, 00:17:21.342 "num_base_bdevs": 2, 00:17:21.342 "num_base_bdevs_discovered": 1, 00:17:21.342 "num_base_bdevs_operational": 1, 00:17:21.342 "base_bdevs_list": [ 00:17:21.342 { 00:17:21.342 "name": null, 00:17:21.342 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.342 "is_configured": false, 00:17:21.342 "data_offset": 0, 00:17:21.342 "data_size": 7936 00:17:21.342 }, 00:17:21.342 { 00:17:21.342 "name": "pt2", 00:17:21.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.342 "is_configured": true, 00:17:21.342 "data_offset": 256, 00:17:21.342 "data_size": 7936 00:17:21.342 } 00:17:21.342 ] 00:17:21.342 }' 00:17:21.342 10:28:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:21.342 10:28:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 [2024-11-19 10:28:35.283189] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.602 [2024-11-19 10:28:35.283213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.602 [2024-11-19 10:28:35.283263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.602 [2024-11-19 10:28:35.283297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.602 [2024-11-19 10:28:35.283306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:21.602 10:28:35 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.602 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.602 [2024-11-19 10:28:35.355088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.602 [2024-11-19 10:28:35.355138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.602 
[2024-11-19 10:28:35.355153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:21.602 [2024-11-19 10:28:35.355162] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.603 [2024-11-19 10:28:35.357035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.603 [2024-11-19 10:28:35.357073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.603 [2024-11-19 10:28:35.357109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.603 [2024-11-19 10:28:35.357159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.603 [2024-11-19 10:28:35.357235] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:21.603 [2024-11-19 10:28:35.357246] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:21.603 [2024-11-19 10:28:35.357310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:21.603 [2024-11-19 10:28:35.357412] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:21.603 [2024-11-19 10:28:35.357420] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:21.603 [2024-11-19 10:28:35.357510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.603 pt2 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.603 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.862 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.862 "name": "raid_bdev1", 00:17:21.862 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:21.862 "strip_size_kb": 0, 00:17:21.862 "state": "online", 00:17:21.862 "raid_level": "raid1", 00:17:21.862 "superblock": true, 00:17:21.862 "num_base_bdevs": 2, 00:17:21.862 "num_base_bdevs_discovered": 1, 00:17:21.862 "num_base_bdevs_operational": 1, 00:17:21.862 "base_bdevs_list": [ 00:17:21.862 { 00:17:21.862 
"name": null, 00:17:21.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.862 "is_configured": false, 00:17:21.862 "data_offset": 256, 00:17:21.862 "data_size": 7936 00:17:21.862 }, 00:17:21.862 { 00:17:21.862 "name": "pt2", 00:17:21.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.862 "is_configured": true, 00:17:21.862 "data_offset": 256, 00:17:21.862 "data_size": 7936 00:17:21.862 } 00:17:21.862 ] 00:17:21.862 }' 00:17:21.862 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.862 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.123 [2024-11-19 10:28:35.810257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.123 [2024-11-19 10:28:35.810280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:22.123 [2024-11-19 10:28:35.810321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.123 [2024-11-19 10:28:35.810356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:22.123 [2024-11-19 10:28:35.810364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.123 10:28:35 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.123 [2024-11-19 10:28:35.874197] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:22.123 [2024-11-19 10:28:35.874280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.123 [2024-11-19 10:28:35.874311] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:22.123 [2024-11-19 10:28:35.874338] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.123 [2024-11-19 10:28:35.876135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.123 [2024-11-19 10:28:35.876198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:22.123 [2024-11-19 10:28:35.876256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:22.123 [2024-11-19 10:28:35.876315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:22.123 [2024-11-19 10:28:35.876466] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:22.123 [2024-11-19 10:28:35.876513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:22.123 [2024-11-19 10:28:35.876551] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:22.123 [2024-11-19 10:28:35.876663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.123 [2024-11-19 10:28:35.876751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:22.123 [2024-11-19 10:28:35.876762] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.123 [2024-11-19 10:28:35.876825] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:22.123 [2024-11-19 10:28:35.876919] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:22.123 [2024-11-19 10:28:35.876928] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:22.123 [2024-11-19 10:28:35.877037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.123 pt1 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.123 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.384 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.384 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.384 "name": "raid_bdev1", 00:17:22.384 "uuid": "2faff4cd-e199-47f6-8181-5a969ea13900", 00:17:22.384 "strip_size_kb": 0, 00:17:22.384 "state": "online", 00:17:22.384 "raid_level": "raid1", 00:17:22.384 "superblock": true, 00:17:22.384 "num_base_bdevs": 2, 00:17:22.384 "num_base_bdevs_discovered": 1, 00:17:22.384 
"num_base_bdevs_operational": 1, 00:17:22.384 "base_bdevs_list": [ 00:17:22.384 { 00:17:22.384 "name": null, 00:17:22.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.384 "is_configured": false, 00:17:22.384 "data_offset": 256, 00:17:22.384 "data_size": 7936 00:17:22.384 }, 00:17:22.384 { 00:17:22.384 "name": "pt2", 00:17:22.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.384 "is_configured": true, 00:17:22.384 "data_offset": 256, 00:17:22.384 "data_size": 7936 00:17:22.384 } 00:17:22.384 ] 00:17:22.384 }' 00:17:22.384 10:28:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.384 10:28:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.644 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.644 [2024-11-19 
10:28:36.417406] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 2faff4cd-e199-47f6-8181-5a969ea13900 '!=' 2faff4cd-e199-47f6-8181-5a969ea13900 ']' 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87109 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87109 ']' 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87109 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87109 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87109' 00:17:22.905 killing process with pid 87109 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87109 00:17:22.905 [2024-11-19 10:28:36.483815] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.905 [2024-11-19 10:28:36.483872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:22.905 [2024-11-19 10:28:36.483902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:22.905 [2024-11-19 10:28:36.483916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:22.905 10:28:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87109 00:17:23.165 [2024-11-19 10:28:36.688161] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.105 10:28:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:24.105 00:17:24.105 real 0m6.032s 00:17:24.105 user 0m9.170s 00:17:24.105 sys 0m1.130s 00:17:24.106 10:28:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:24.106 10:28:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.106 ************************************ 00:17:24.106 END TEST raid_superblock_test_md_separate 00:17:24.106 ************************************ 00:17:24.106 10:28:37 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:24.106 10:28:37 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:24.106 10:28:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:24.106 10:28:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:24.106 10:28:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.106 ************************************ 00:17:24.106 START TEST raid_rebuild_test_sb_md_separate 00:17:24.106 ************************************ 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:24.106 
10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87436 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87436 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87436 ']' 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:24.106 10:28:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.366 [2024-11-19 10:28:37.902101] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:24.366 [2024-11-19 10:28:37.902298] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:24.366 Zero copy mechanism will not be used. 00:17:24.366 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87436 ] 00:17:24.366 [2024-11-19 10:28:38.073792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.627 [2024-11-19 10:28:38.178293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.627 [2024-11-19 10:28:38.357805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.627 [2024-11-19 10:28:38.357849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.196 BaseBdev1_malloc 
00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.196 [2024-11-19 10:28:38.763266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.196 [2024-11-19 10:28:38.763327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.196 [2024-11-19 10:28:38.763356] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:25.196 [2024-11-19 10:28:38.763368] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.196 [2024-11-19 10:28:38.765312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.196 [2024-11-19 10:28:38.765350] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:25.196 BaseBdev1 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.196 BaseBdev2_malloc 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.196 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.197 [2024-11-19 10:28:38.817566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:25.197 [2024-11-19 10:28:38.817622] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.197 [2024-11-19 10:28:38.817639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:25.197 [2024-11-19 10:28:38.817649] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.197 [2024-11-19 10:28:38.819403] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.197 [2024-11-19 10:28:38.819439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:25.197 BaseBdev2 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.197 spare_malloc 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.197 spare_delay 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.197 [2024-11-19 10:28:38.913583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:25.197 [2024-11-19 10:28:38.913639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.197 [2024-11-19 10:28:38.913658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:25.197 [2024-11-19 10:28:38.913668] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.197 [2024-11-19 10:28:38.915404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.197 [2024-11-19 10:28:38.915537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:25.197 spare 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.197 [2024-11-19 10:28:38.925597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:25.197 [2024-11-19 10:28:38.927299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:25.197 [2024-11-19 10:28:38.927480] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:25.197 [2024-11-19 10:28:38.927495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.197 [2024-11-19 10:28:38.927558] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:25.197 [2024-11-19 10:28:38.927680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:25.197 [2024-11-19 10:28:38.927688] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:25.197 [2024-11-19 10:28:38.927785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.197 10:28:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.197 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.457 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.457 "name": "raid_bdev1", 00:17:25.457 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:25.457 "strip_size_kb": 0, 00:17:25.457 "state": "online", 00:17:25.457 "raid_level": "raid1", 00:17:25.457 "superblock": true, 00:17:25.457 "num_base_bdevs": 2, 00:17:25.457 "num_base_bdevs_discovered": 2, 00:17:25.457 "num_base_bdevs_operational": 2, 00:17:25.457 "base_bdevs_list": [ 00:17:25.457 { 00:17:25.457 "name": "BaseBdev1", 00:17:25.457 "uuid": "a455830b-a7ab-5232-a5ad-be910c1c0d86", 00:17:25.457 "is_configured": true, 00:17:25.457 "data_offset": 256, 00:17:25.457 "data_size": 7936 00:17:25.457 }, 00:17:25.457 { 00:17:25.457 "name": "BaseBdev2", 00:17:25.457 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:25.457 "is_configured": true, 00:17:25.457 "data_offset": 256, 00:17:25.457 "data_size": 7936 
00:17:25.457 } 00:17:25.457 ] 00:17:25.457 }' 00:17:25.457 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.457 10:28:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.717 [2024-11-19 10:28:39.377029] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.717 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:25.976 [2024-11-19 10:28:39.628425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:25.976 /dev/nbd0 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.976 1+0 records in 00:17:25.976 1+0 records out 00:17:25.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000298812 s, 13.7 MB/s 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:25.976 10:28:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:25.976 10:28:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:26.915 7936+0 records in 00:17:26.915 7936+0 records out 00:17:26.915 32505856 bytes (33 MB, 31 MiB) copied, 0.628216 s, 51.7 MB/s 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:26.915 [2024-11-19 10:28:40.549975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.915 10:28:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.915 [2024-11-19 10:28:40.582012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.915 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.916 "name": "raid_bdev1", 00:17:26.916 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:26.916 "strip_size_kb": 0, 00:17:26.916 "state": "online", 00:17:26.916 "raid_level": "raid1", 00:17:26.916 "superblock": true, 00:17:26.916 "num_base_bdevs": 2, 00:17:26.916 "num_base_bdevs_discovered": 1, 00:17:26.916 "num_base_bdevs_operational": 1, 00:17:26.916 "base_bdevs_list": [ 00:17:26.916 { 00:17:26.916 "name": null, 00:17:26.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.916 "is_configured": false, 00:17:26.916 "data_offset": 0, 00:17:26.916 "data_size": 7936 00:17:26.916 }, 00:17:26.916 { 00:17:26.916 "name": "BaseBdev2", 00:17:26.916 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:26.916 "is_configured": true, 00:17:26.916 "data_offset": 256, 00:17:26.916 "data_size": 7936 00:17:26.916 } 00:17:26.916 ] 00:17:26.916 }' 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.916 10:28:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:27.486 10:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:27.486 10:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.486 10:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.486 [2024-11-19 10:28:41.017249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:27.486 [2024-11-19 10:28:41.031280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:27.486 10:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.486 10:28:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:27.486 [2024-11-19 10:28:41.033079] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.426 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.426 "name": "raid_bdev1", 00:17:28.426 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:28.426 "strip_size_kb": 0, 00:17:28.426 "state": "online", 00:17:28.426 "raid_level": "raid1", 00:17:28.426 "superblock": true, 00:17:28.426 "num_base_bdevs": 2, 00:17:28.426 "num_base_bdevs_discovered": 2, 00:17:28.426 "num_base_bdevs_operational": 2, 00:17:28.426 "process": { 00:17:28.426 "type": "rebuild", 00:17:28.426 "target": "spare", 00:17:28.426 "progress": { 00:17:28.426 "blocks": 2560, 00:17:28.426 "percent": 32 00:17:28.426 } 00:17:28.426 }, 00:17:28.426 "base_bdevs_list": [ 00:17:28.426 { 00:17:28.426 "name": "spare", 00:17:28.426 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:28.426 "is_configured": true, 00:17:28.426 "data_offset": 256, 00:17:28.426 "data_size": 7936 00:17:28.426 }, 00:17:28.427 { 00:17:28.427 "name": "BaseBdev2", 00:17:28.427 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:28.427 "is_configured": true, 00:17:28.427 "data_offset": 256, 00:17:28.427 "data_size": 7936 00:17:28.427 } 00:17:28.427 ] 00:17:28.427 }' 00:17:28.427 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.427 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.427 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.427 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.427 10:28:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:28.427 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.427 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.427 [2024-11-19 10:28:42.188707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.687 [2024-11-19 10:28:42.237678] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:28.687 [2024-11-19 10:28:42.237733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.687 [2024-11-19 10:28:42.237747] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:28.687 [2024-11-19 10:28:42.237755] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.687 10:28:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.687 "name": "raid_bdev1", 00:17:28.687 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:28.687 "strip_size_kb": 0, 00:17:28.687 "state": "online", 00:17:28.687 "raid_level": "raid1", 00:17:28.687 "superblock": true, 00:17:28.687 "num_base_bdevs": 2, 00:17:28.687 "num_base_bdevs_discovered": 1, 00:17:28.687 "num_base_bdevs_operational": 1, 00:17:28.687 "base_bdevs_list": [ 00:17:28.687 { 00:17:28.687 "name": null, 00:17:28.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.687 "is_configured": false, 00:17:28.687 "data_offset": 0, 00:17:28.687 "data_size": 7936 00:17:28.687 }, 00:17:28.687 { 00:17:28.687 "name": "BaseBdev2", 00:17:28.687 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:28.687 "is_configured": true, 00:17:28.687 "data_offset": 256, 00:17:28.687 "data_size": 7936 00:17:28.687 } 00:17:28.687 ] 00:17:28.687 }' 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.687 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.948 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.948 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.948 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:29.208 "name": "raid_bdev1", 00:17:29.208 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:29.208 "strip_size_kb": 0, 00:17:29.208 "state": "online", 00:17:29.208 "raid_level": "raid1", 00:17:29.208 "superblock": true, 00:17:29.208 "num_base_bdevs": 2, 00:17:29.208 "num_base_bdevs_discovered": 1, 00:17:29.208 "num_base_bdevs_operational": 1, 00:17:29.208 "base_bdevs_list": [ 00:17:29.208 { 00:17:29.208 "name": null, 00:17:29.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.208 
"is_configured": false, 00:17:29.208 "data_offset": 0, 00:17:29.208 "data_size": 7936 00:17:29.208 }, 00:17:29.208 { 00:17:29.208 "name": "BaseBdev2", 00:17:29.208 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:29.208 "is_configured": true, 00:17:29.208 "data_offset": 256, 00:17:29.208 "data_size": 7936 00:17:29.208 } 00:17:29.208 ] 00:17:29.208 }' 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.208 [2024-11-19 10:28:42.867926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:29.208 [2024-11-19 10:28:42.881104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.208 10:28:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:29.208 [2024-11-19 10:28:42.882909] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.148 10:28:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.148 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.149 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.149 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.408 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.408 "name": "raid_bdev1", 00:17:30.408 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:30.408 "strip_size_kb": 0, 00:17:30.408 "state": "online", 00:17:30.408 "raid_level": "raid1", 00:17:30.408 "superblock": true, 00:17:30.408 "num_base_bdevs": 2, 00:17:30.408 "num_base_bdevs_discovered": 2, 00:17:30.408 "num_base_bdevs_operational": 2, 00:17:30.408 "process": { 00:17:30.408 "type": "rebuild", 00:17:30.409 "target": "spare", 00:17:30.409 "progress": { 00:17:30.409 "blocks": 2560, 00:17:30.409 "percent": 32 00:17:30.409 } 00:17:30.409 }, 00:17:30.409 "base_bdevs_list": [ 00:17:30.409 { 00:17:30.409 "name": "spare", 00:17:30.409 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:30.409 "is_configured": true, 00:17:30.409 "data_offset": 256, 00:17:30.409 "data_size": 7936 00:17:30.409 }, 
00:17:30.409 { 00:17:30.409 "name": "BaseBdev2", 00:17:30.409 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:30.409 "is_configured": true, 00:17:30.409 "data_offset": 256, 00:17:30.409 "data_size": 7936 00:17:30.409 } 00:17:30.409 ] 00:17:30.409 }' 00:17:30.409 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.409 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.409 10:28:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:30.409 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=689 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.409 10:28:44 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.409 "name": "raid_bdev1", 00:17:30.409 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:30.409 "strip_size_kb": 0, 00:17:30.409 "state": "online", 00:17:30.409 "raid_level": "raid1", 00:17:30.409 "superblock": true, 00:17:30.409 "num_base_bdevs": 2, 00:17:30.409 "num_base_bdevs_discovered": 2, 00:17:30.409 "num_base_bdevs_operational": 2, 00:17:30.409 "process": { 00:17:30.409 "type": "rebuild", 00:17:30.409 "target": "spare", 00:17:30.409 "progress": { 00:17:30.409 "blocks": 2816, 00:17:30.409 "percent": 35 00:17:30.409 } 00:17:30.409 }, 00:17:30.409 "base_bdevs_list": [ 00:17:30.409 { 00:17:30.409 "name": "spare", 00:17:30.409 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:30.409 "is_configured": true, 00:17:30.409 "data_offset": 256, 00:17:30.409 "data_size": 7936 00:17:30.409 }, 00:17:30.409 { 00:17:30.409 "name": "BaseBdev2", 00:17:30.409 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:30.409 
"is_configured": true, 00:17:30.409 "data_offset": 256, 00:17:30.409 "data_size": 7936 00:17:30.409 } 00:17:30.409 ] 00:17:30.409 }' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.409 10:28:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.791 10:28:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.791 "name": "raid_bdev1", 00:17:31.791 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:31.791 "strip_size_kb": 0, 00:17:31.791 "state": "online", 00:17:31.791 "raid_level": "raid1", 00:17:31.791 "superblock": true, 00:17:31.791 "num_base_bdevs": 2, 00:17:31.791 "num_base_bdevs_discovered": 2, 00:17:31.791 "num_base_bdevs_operational": 2, 00:17:31.791 "process": { 00:17:31.791 "type": "rebuild", 00:17:31.791 "target": "spare", 00:17:31.791 "progress": { 00:17:31.791 "blocks": 5888, 00:17:31.791 "percent": 74 00:17:31.791 } 00:17:31.791 }, 00:17:31.791 "base_bdevs_list": [ 00:17:31.791 { 00:17:31.791 "name": "spare", 00:17:31.791 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:31.791 "is_configured": true, 00:17:31.791 "data_offset": 256, 00:17:31.791 "data_size": 7936 00:17:31.791 }, 00:17:31.791 { 00:17:31.791 "name": "BaseBdev2", 00:17:31.791 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:31.791 "is_configured": true, 00:17:31.791 "data_offset": 256, 00:17:31.791 "data_size": 7936 00:17:31.791 } 00:17:31.791 ] 00:17:31.791 }' 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.791 10:28:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.360 [2024-11-19 10:28:45.994341] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:17:32.360 [2024-11-19 10:28:45.994406] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:32.360 [2024-11-19 10:28:45.994494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.621 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.881 "name": "raid_bdev1", 00:17:32.881 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:32.881 "strip_size_kb": 0, 00:17:32.881 "state": "online", 00:17:32.881 "raid_level": "raid1", 00:17:32.881 "superblock": true, 00:17:32.881 
"num_base_bdevs": 2, 00:17:32.881 "num_base_bdevs_discovered": 2, 00:17:32.881 "num_base_bdevs_operational": 2, 00:17:32.881 "base_bdevs_list": [ 00:17:32.881 { 00:17:32.881 "name": "spare", 00:17:32.881 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:32.881 "is_configured": true, 00:17:32.881 "data_offset": 256, 00:17:32.881 "data_size": 7936 00:17:32.881 }, 00:17:32.881 { 00:17:32.881 "name": "BaseBdev2", 00:17:32.881 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:32.881 "is_configured": true, 00:17:32.881 "data_offset": 256, 00:17:32.881 "data_size": 7936 00:17:32.881 } 00:17:32.881 ] 00:17:32.881 }' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.881 10:28:46 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.881 "name": "raid_bdev1", 00:17:32.881 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:32.881 "strip_size_kb": 0, 00:17:32.881 "state": "online", 00:17:32.881 "raid_level": "raid1", 00:17:32.881 "superblock": true, 00:17:32.881 "num_base_bdevs": 2, 00:17:32.881 "num_base_bdevs_discovered": 2, 00:17:32.881 "num_base_bdevs_operational": 2, 00:17:32.881 "base_bdevs_list": [ 00:17:32.881 { 00:17:32.881 "name": "spare", 00:17:32.881 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:32.881 "is_configured": true, 00:17:32.881 "data_offset": 256, 00:17:32.881 "data_size": 7936 00:17:32.881 }, 00:17:32.881 { 00:17:32.881 "name": "BaseBdev2", 00:17:32.881 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:32.881 "is_configured": true, 00:17:32.881 "data_offset": 256, 00:17:32.881 "data_size": 7936 00:17:32.881 } 00:17:32.881 ] 00:17:32.881 }' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.881 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.141 "name": "raid_bdev1", 00:17:33.141 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:33.141 
"strip_size_kb": 0, 00:17:33.141 "state": "online", 00:17:33.141 "raid_level": "raid1", 00:17:33.141 "superblock": true, 00:17:33.141 "num_base_bdevs": 2, 00:17:33.141 "num_base_bdevs_discovered": 2, 00:17:33.141 "num_base_bdevs_operational": 2, 00:17:33.141 "base_bdevs_list": [ 00:17:33.141 { 00:17:33.141 "name": "spare", 00:17:33.141 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:33.141 "is_configured": true, 00:17:33.141 "data_offset": 256, 00:17:33.141 "data_size": 7936 00:17:33.141 }, 00:17:33.141 { 00:17:33.141 "name": "BaseBdev2", 00:17:33.141 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:33.141 "is_configured": true, 00:17:33.141 "data_offset": 256, 00:17:33.141 "data_size": 7936 00:17:33.141 } 00:17:33.141 ] 00:17:33.141 }' 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.141 10:28:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.402 [2024-11-19 10:28:47.111272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:33.402 [2024-11-19 10:28:47.111358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.402 [2024-11-19 10:28:47.111457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.402 [2024-11-19 10:28:47.111520] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.402 [2024-11-19 10:28:47.111530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.402 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:33.662 /dev/nbd0 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.662 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.662 1+0 records in 00:17:33.662 1+0 records out 00:17:33.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436227 s, 9.4 MB/s 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:33.922 /dev/nbd1 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:33.922 1+0 records in 00:17:33.922 1+0 records out 00:17:33.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034659 s, 11.8 MB/s 00:17:33.922 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.181 10:28:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:34.441 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:34.701 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.702 [2024-11-19 10:28:48.314045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:34.702 [2024-11-19 10:28:48.314093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:34.702 [2024-11-19 10:28:48.314114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:34.702 [2024-11-19 10:28:48.314122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:17:34.702 [2024-11-19 10:28:48.316032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:34.702 [2024-11-19 10:28:48.316070] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:34.702 [2024-11-19 10:28:48.316121] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:34.702 [2024-11-19 10:28:48.316174] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.702 [2024-11-19 10:28:48.316280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:34.702 spare 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.702 [2024-11-19 10:28:48.416152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:34.702 [2024-11-19 10:28:48.416177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:34.702 [2024-11-19 10:28:48.416263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:34.702 [2024-11-19 10:28:48.416386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:34.702 [2024-11-19 10:28:48.416394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:34.702 [2024-11-19 10:28:48.416505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:34.702 "name": "raid_bdev1", 00:17:34.702 "uuid": 
"3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:34.702 "strip_size_kb": 0, 00:17:34.702 "state": "online", 00:17:34.702 "raid_level": "raid1", 00:17:34.702 "superblock": true, 00:17:34.702 "num_base_bdevs": 2, 00:17:34.702 "num_base_bdevs_discovered": 2, 00:17:34.702 "num_base_bdevs_operational": 2, 00:17:34.702 "base_bdevs_list": [ 00:17:34.702 { 00:17:34.702 "name": "spare", 00:17:34.702 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:34.702 "is_configured": true, 00:17:34.702 "data_offset": 256, 00:17:34.702 "data_size": 7936 00:17:34.702 }, 00:17:34.702 { 00:17:34.702 "name": "BaseBdev2", 00:17:34.702 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:34.702 "is_configured": true, 00:17:34.702 "data_offset": 256, 00:17:34.702 "data_size": 7936 00:17:34.702 } 00:17:34.702 ] 00:17:34.702 }' 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:34.702 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.273 "name": "raid_bdev1", 00:17:35.273 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:35.273 "strip_size_kb": 0, 00:17:35.273 "state": "online", 00:17:35.273 "raid_level": "raid1", 00:17:35.273 "superblock": true, 00:17:35.273 "num_base_bdevs": 2, 00:17:35.273 "num_base_bdevs_discovered": 2, 00:17:35.273 "num_base_bdevs_operational": 2, 00:17:35.273 "base_bdevs_list": [ 00:17:35.273 { 00:17:35.273 "name": "spare", 00:17:35.273 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:35.273 "is_configured": true, 00:17:35.273 "data_offset": 256, 00:17:35.273 "data_size": 7936 00:17:35.273 }, 00:17:35.273 { 00:17:35.273 "name": "BaseBdev2", 00:17:35.273 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:35.273 "is_configured": true, 00:17:35.273 "data_offset": 256, 00:17:35.273 "data_size": 7936 00:17:35.273 } 00:17:35.273 ] 00:17:35.273 }' 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:35.273 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.274 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:35.274 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:35.274 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.274 10:28:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.274 [2024-11-19 10:28:49.028817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.274 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.559 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.559 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:35.559 "name": "raid_bdev1", 00:17:35.559 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:35.559 "strip_size_kb": 0, 00:17:35.559 "state": "online", 00:17:35.559 "raid_level": "raid1", 00:17:35.559 "superblock": true, 00:17:35.559 "num_base_bdevs": 2, 00:17:35.559 "num_base_bdevs_discovered": 1, 00:17:35.559 "num_base_bdevs_operational": 1, 00:17:35.559 "base_bdevs_list": [ 00:17:35.559 { 00:17:35.559 "name": null, 00:17:35.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.559 "is_configured": false, 00:17:35.559 "data_offset": 0, 00:17:35.559 "data_size": 7936 00:17:35.559 }, 00:17:35.559 { 00:17:35.559 "name": "BaseBdev2", 00:17:35.559 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:35.559 "is_configured": true, 00:17:35.559 "data_offset": 256, 00:17:35.559 "data_size": 7936 00:17:35.559 } 00:17:35.559 ] 00:17:35.559 }' 00:17:35.559 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:35.559 10:28:49 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.828 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:35.828 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.828 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:35.828 [2024-11-19 10:28:49.496099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.829 [2024-11-19 10:28:49.496280] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:35.829 [2024-11-19 10:28:49.496344] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:35.829 [2024-11-19 10:28:49.496401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:35.829 [2024-11-19 10:28:49.509445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:35.829 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.829 10:28:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:35.829 [2024-11-19 10:28:49.511205] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:36.769 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.029 "name": "raid_bdev1", 00:17:37.029 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:37.029 "strip_size_kb": 0, 00:17:37.029 "state": "online", 00:17:37.029 "raid_level": "raid1", 00:17:37.029 "superblock": true, 00:17:37.029 "num_base_bdevs": 2, 00:17:37.029 "num_base_bdevs_discovered": 2, 00:17:37.029 "num_base_bdevs_operational": 2, 00:17:37.029 "process": { 00:17:37.029 "type": "rebuild", 00:17:37.029 "target": "spare", 00:17:37.029 "progress": { 00:17:37.029 "blocks": 2560, 00:17:37.029 "percent": 32 00:17:37.029 } 00:17:37.029 }, 00:17:37.029 "base_bdevs_list": [ 00:17:37.029 { 00:17:37.029 "name": "spare", 00:17:37.029 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:37.029 "is_configured": true, 00:17:37.029 "data_offset": 256, 00:17:37.029 "data_size": 7936 00:17:37.029 }, 00:17:37.029 { 00:17:37.029 "name": "BaseBdev2", 00:17:37.029 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:37.029 "is_configured": true, 00:17:37.029 "data_offset": 256, 00:17:37.029 "data_size": 7936 00:17:37.029 } 00:17:37.029 ] 00:17:37.029 }' 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.029 [2024-11-19 10:28:50.671860] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.029 [2024-11-19 10:28:50.715700] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:37.029 [2024-11-19 10:28:50.715751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.029 [2024-11-19 10:28:50.715765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:37.029 [2024-11-19 10:28:50.715783] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.029 "name": "raid_bdev1", 00:17:37.029 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:37.029 "strip_size_kb": 0, 00:17:37.029 "state": "online", 00:17:37.029 "raid_level": "raid1", 00:17:37.029 "superblock": true, 00:17:37.029 "num_base_bdevs": 2, 00:17:37.029 "num_base_bdevs_discovered": 1, 00:17:37.029 "num_base_bdevs_operational": 1, 00:17:37.029 "base_bdevs_list": [ 00:17:37.029 { 00:17:37.029 "name": null, 00:17:37.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.029 
"is_configured": false, 00:17:37.029 "data_offset": 0, 00:17:37.029 "data_size": 7936 00:17:37.029 }, 00:17:37.029 { 00:17:37.029 "name": "BaseBdev2", 00:17:37.029 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:37.029 "is_configured": true, 00:17:37.029 "data_offset": 256, 00:17:37.029 "data_size": 7936 00:17:37.029 } 00:17:37.029 ] 00:17:37.029 }' 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.029 10:28:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.600 10:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:37.600 10:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.600 10:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.600 [2024-11-19 10:28:51.154156] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:37.600 [2024-11-19 10:28:51.154254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:37.600 [2024-11-19 10:28:51.154293] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:37.600 [2024-11-19 10:28:51.154327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:37.600 [2024-11-19 10:28:51.154571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:37.600 [2024-11-19 10:28:51.154624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:37.600 [2024-11-19 10:28:51.154697] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:37.600 [2024-11-19 10:28:51.154736] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:17:37.600 [2024-11-19 10:28:51.154776] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:37.600 [2024-11-19 10:28:51.154819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:37.600 [2024-11-19 10:28:51.168053] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:37.600 [2024-11-19 10:28:51.169801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:37.600 spare 00:17:37.600 10:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.600 10:28:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.540 "name": "raid_bdev1", 00:17:38.540 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:38.540 "strip_size_kb": 0, 00:17:38.540 "state": "online", 00:17:38.540 "raid_level": "raid1", 00:17:38.540 "superblock": true, 00:17:38.540 "num_base_bdevs": 2, 00:17:38.540 "num_base_bdevs_discovered": 2, 00:17:38.540 "num_base_bdevs_operational": 2, 00:17:38.540 "process": { 00:17:38.540 "type": "rebuild", 00:17:38.540 "target": "spare", 00:17:38.540 "progress": { 00:17:38.540 "blocks": 2560, 00:17:38.540 "percent": 32 00:17:38.540 } 00:17:38.540 }, 00:17:38.540 "base_bdevs_list": [ 00:17:38.540 { 00:17:38.540 "name": "spare", 00:17:38.540 "uuid": "5a0a5114-640e-5c00-9f1d-a1ef36358816", 00:17:38.540 "is_configured": true, 00:17:38.540 "data_offset": 256, 00:17:38.540 "data_size": 7936 00:17:38.540 }, 00:17:38.540 { 00:17:38.540 "name": "BaseBdev2", 00:17:38.540 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:38.540 "is_configured": true, 00:17:38.540 "data_offset": 256, 00:17:38.540 "data_size": 7936 00:17:38.540 } 00:17:38.540 ] 00:17:38.540 }' 00:17:38.540 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.541 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.541 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.800 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.800 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:38.800 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.801 10:28:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.801 [2024-11-19 10:28:52.333866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.801 [2024-11-19 10:28:52.374279] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:38.801 [2024-11-19 10:28:52.374330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.801 [2024-11-19 10:28:52.374346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:38.801 [2024-11-19 10:28:52.374352] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.801 10:28:52 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.801 "name": "raid_bdev1", 00:17:38.801 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:38.801 "strip_size_kb": 0, 00:17:38.801 "state": "online", 00:17:38.801 "raid_level": "raid1", 00:17:38.801 "superblock": true, 00:17:38.801 "num_base_bdevs": 2, 00:17:38.801 "num_base_bdevs_discovered": 1, 00:17:38.801 "num_base_bdevs_operational": 1, 00:17:38.801 "base_bdevs_list": [ 00:17:38.801 { 00:17:38.801 "name": null, 00:17:38.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.801 "is_configured": false, 00:17:38.801 "data_offset": 0, 00:17:38.801 "data_size": 7936 00:17:38.801 }, 00:17:38.801 { 00:17:38.801 "name": "BaseBdev2", 00:17:38.801 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:38.801 "is_configured": true, 00:17:38.801 "data_offset": 256, 00:17:38.801 "data_size": 7936 00:17:38.801 } 00:17:38.801 ] 00:17:38.801 }' 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.801 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.370 "name": "raid_bdev1", 00:17:39.370 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:39.370 "strip_size_kb": 0, 00:17:39.370 "state": "online", 00:17:39.370 "raid_level": "raid1", 00:17:39.370 "superblock": true, 00:17:39.370 "num_base_bdevs": 2, 00:17:39.370 "num_base_bdevs_discovered": 1, 00:17:39.370 "num_base_bdevs_operational": 1, 00:17:39.370 "base_bdevs_list": [ 00:17:39.370 { 00:17:39.370 "name": null, 00:17:39.370 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.370 "is_configured": false, 00:17:39.370 "data_offset": 0, 00:17:39.370 "data_size": 7936 00:17:39.370 }, 00:17:39.370 { 00:17:39.370 "name": "BaseBdev2", 00:17:39.370 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:39.370 "is_configured": true, 
00:17:39.370 "data_offset": 256, 00:17:39.370 "data_size": 7936 00:17:39.370 } 00:17:39.370 ] 00:17:39.370 }' 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.370 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:39.370 [2024-11-19 10:28:52.996126] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:39.370 [2024-11-19 10:28:52.996172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.370 [2024-11-19 10:28:52.996194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:39.370 [2024-11-19 10:28:52.996202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.370 [2024-11-19 10:28:52.996386] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.370 [2024-11-19 10:28:52.996398] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:39.370 [2024-11-19 10:28:52.996440] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:39.370 [2024-11-19 10:28:52.996452] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:39.370 [2024-11-19 10:28:52.996460] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:39.371 [2024-11-19 10:28:52.996470] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:39.371 BaseBdev1 00:17:39.371 10:28:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.371 10:28:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.311 "name": "raid_bdev1", 00:17:40.311 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:40.311 "strip_size_kb": 0, 00:17:40.311 "state": "online", 00:17:40.311 "raid_level": "raid1", 00:17:40.311 "superblock": true, 00:17:40.311 "num_base_bdevs": 2, 00:17:40.311 "num_base_bdevs_discovered": 1, 00:17:40.311 "num_base_bdevs_operational": 1, 00:17:40.311 "base_bdevs_list": [ 00:17:40.311 { 00:17:40.311 "name": null, 00:17:40.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.311 "is_configured": false, 00:17:40.311 "data_offset": 0, 00:17:40.311 "data_size": 7936 00:17:40.311 }, 00:17:40.311 { 00:17:40.311 "name": "BaseBdev2", 00:17:40.311 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:40.311 "is_configured": true, 00:17:40.311 "data_offset": 256, 00:17:40.311 "data_size": 7936 00:17:40.311 } 00:17:40.311 ] 00:17:40.311 }' 00:17:40.311 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.311 10:28:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.882 "name": "raid_bdev1", 00:17:40.882 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:40.882 "strip_size_kb": 0, 00:17:40.882 "state": "online", 00:17:40.882 "raid_level": "raid1", 00:17:40.882 "superblock": true, 00:17:40.882 "num_base_bdevs": 2, 00:17:40.882 "num_base_bdevs_discovered": 1, 00:17:40.882 "num_base_bdevs_operational": 1, 00:17:40.882 "base_bdevs_list": [ 00:17:40.882 { 00:17:40.882 "name": null, 00:17:40.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.882 "is_configured": false, 00:17:40.882 "data_offset": 0, 00:17:40.882 
"data_size": 7936 00:17:40.882 }, 00:17:40.882 { 00:17:40.882 "name": "BaseBdev2", 00:17:40.882 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:40.882 "is_configured": true, 00:17:40.882 "data_offset": 256, 00:17:40.882 "data_size": 7936 00:17:40.882 } 00:17:40.882 ] 00:17:40.882 }' 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:40.882 [2024-11-19 10:28:54.605453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:40.882 [2024-11-19 10:28:54.605641] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.882 [2024-11-19 10:28:54.605700] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:40.882 request: 00:17:40.882 { 00:17:40.882 "base_bdev": "BaseBdev1", 00:17:40.882 "raid_bdev": "raid_bdev1", 00:17:40.882 "method": "bdev_raid_add_base_bdev", 00:17:40.882 "req_id": 1 00:17:40.882 } 00:17:40.882 Got JSON-RPC error response 00:17:40.882 response: 00:17:40.882 { 00:17:40.882 "code": -22, 00:17:40.882 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:40.882 } 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:40.882 10:28:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.264 "name": "raid_bdev1", 00:17:42.264 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:42.264 "strip_size_kb": 0, 00:17:42.264 "state": "online", 00:17:42.264 "raid_level": "raid1", 00:17:42.264 "superblock": true, 00:17:42.264 "num_base_bdevs": 2, 00:17:42.264 "num_base_bdevs_discovered": 1, 00:17:42.264 "num_base_bdevs_operational": 1, 00:17:42.264 "base_bdevs_list": [ 
00:17:42.264 { 00:17:42.264 "name": null, 00:17:42.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.264 "is_configured": false, 00:17:42.264 "data_offset": 0, 00:17:42.264 "data_size": 7936 00:17:42.264 }, 00:17:42.264 { 00:17:42.264 "name": "BaseBdev2", 00:17:42.264 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:42.264 "is_configured": true, 00:17:42.264 "data_offset": 256, 00:17:42.264 "data_size": 7936 00:17:42.264 } 00:17:42.264 ] 00:17:42.264 }' 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.264 10:28:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:42.524 "name": "raid_bdev1", 00:17:42.524 "uuid": "3cc1a5c6-1e46-4192-9882-58b4cf871c0b", 00:17:42.524 "strip_size_kb": 0, 00:17:42.524 "state": "online", 00:17:42.524 "raid_level": "raid1", 00:17:42.524 "superblock": true, 00:17:42.524 "num_base_bdevs": 2, 00:17:42.524 "num_base_bdevs_discovered": 1, 00:17:42.524 "num_base_bdevs_operational": 1, 00:17:42.524 "base_bdevs_list": [ 00:17:42.524 { 00:17:42.524 "name": null, 00:17:42.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.524 "is_configured": false, 00:17:42.524 "data_offset": 0, 00:17:42.524 "data_size": 7936 00:17:42.524 }, 00:17:42.524 { 00:17:42.524 "name": "BaseBdev2", 00:17:42.524 "uuid": "2a9bf825-1a53-5fed-b540-3edca16b84fa", 00:17:42.524 "is_configured": true, 00:17:42.524 "data_offset": 256, 00:17:42.524 "data_size": 7936 00:17:42.524 } 00:17:42.524 ] 00:17:42.524 }' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87436 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87436 ']' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87436 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.524 
10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87436 00:17:42.524 killing process with pid 87436 00:17:42.524 Received shutdown signal, test time was about 60.000000 seconds 00:17:42.524 00:17:42.524 Latency(us) 00:17:42.524 [2024-11-19T10:28:56.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.524 [2024-11-19T10:28:56.305Z] =================================================================================================================== 00:17:42.524 [2024-11-19T10:28:56.305Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87436' 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87436 00:17:42.524 [2024-11-19 10:28:56.264789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.524 [2024-11-19 10:28:56.264899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:42.524 [2024-11-19 10:28:56.264941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:42.524 [2024-11-19 10:28:56.264950] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:42.524 10:28:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87436 00:17:42.784 [2024-11-19 10:28:56.562119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:44.168 10:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:17:44.168 00:17:44.168 real 0m19.781s 00:17:44.168 user 0m25.854s 00:17:44.168 sys 0m2.740s 00:17:44.168 10:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.168 10:28:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:44.168 ************************************ 00:17:44.168 END TEST raid_rebuild_test_sb_md_separate 00:17:44.168 ************************************ 00:17:44.168 10:28:57 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:44.168 10:28:57 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:44.168 10:28:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:44.168 10:28:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.168 10:28:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:44.168 ************************************ 00:17:44.168 START TEST raid_state_function_test_sb_md_interleaved 00:17:44.168 ************************************ 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:44.168 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:44.169 10:28:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88123 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88123' 00:17:44.169 Process raid pid: 88123 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88123 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88123 ']' 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.169 10:28:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.169 [2024-11-19 10:28:57.767751] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:17:44.169 [2024-11-19 10:28:57.767963] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.429 [2024-11-19 10:28:57.947796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.429 [2024-11-19 10:28:58.054622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.688 [2024-11-19 10:28:58.260603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.688 [2024-11-19 10:28:58.260711] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.948 [2024-11-19 10:28:58.587631] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:44.948 [2024-11-19 10:28:58.587680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:44.948 [2024-11-19 10:28:58.587689] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:44.948 [2024-11-19 10:28:58.587698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:44.948 10:28:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.948 10:28:58 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.948 "name": "Existed_Raid", 00:17:44.948 "uuid": "243fd251-d2d1-4e05-96b7-40248c539929", 00:17:44.948 "strip_size_kb": 0, 00:17:44.948 "state": "configuring", 00:17:44.948 "raid_level": "raid1", 00:17:44.948 "superblock": true, 00:17:44.948 "num_base_bdevs": 2, 00:17:44.948 "num_base_bdevs_discovered": 0, 00:17:44.948 "num_base_bdevs_operational": 2, 00:17:44.948 "base_bdevs_list": [ 00:17:44.948 { 00:17:44.948 "name": "BaseBdev1", 00:17:44.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.948 "is_configured": false, 00:17:44.948 "data_offset": 0, 00:17:44.948 "data_size": 0 00:17:44.948 }, 00:17:44.948 { 00:17:44.948 "name": "BaseBdev2", 00:17:44.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.948 "is_configured": false, 00:17:44.948 "data_offset": 0, 00:17:44.948 "data_size": 0 00:17:44.948 } 00:17:44.948 ] 00:17:44.948 }' 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.948 10:28:58 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 [2024-11-19 10:28:59.046845] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:45.519 [2024-11-19 10:28:59.046934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 [2024-11-19 10:28:59.054835] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:45.519 [2024-11-19 10:28:59.054907] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:45.519 [2024-11-19 10:28:59.054932] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:45.519 [2024-11-19 10:28:59.054955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 [2024-11-19 10:28:59.097484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.519 BaseBdev1 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:45.519 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 [ 00:17:45.520 { 00:17:45.520 "name": "BaseBdev1", 00:17:45.520 "aliases": [ 00:17:45.520 "ed4629bc-d3cf-433e-b735-d096e41c262f" 00:17:45.520 ], 00:17:45.520 "product_name": "Malloc disk", 00:17:45.520 "block_size": 4128, 00:17:45.520 "num_blocks": 8192, 00:17:45.520 "uuid": "ed4629bc-d3cf-433e-b735-d096e41c262f", 00:17:45.520 "md_size": 32, 00:17:45.520 
"md_interleave": true, 00:17:45.520 "dif_type": 0, 00:17:45.520 "assigned_rate_limits": { 00:17:45.520 "rw_ios_per_sec": 0, 00:17:45.520 "rw_mbytes_per_sec": 0, 00:17:45.520 "r_mbytes_per_sec": 0, 00:17:45.520 "w_mbytes_per_sec": 0 00:17:45.520 }, 00:17:45.520 "claimed": true, 00:17:45.520 "claim_type": "exclusive_write", 00:17:45.520 "zoned": false, 00:17:45.520 "supported_io_types": { 00:17:45.520 "read": true, 00:17:45.520 "write": true, 00:17:45.520 "unmap": true, 00:17:45.520 "flush": true, 00:17:45.520 "reset": true, 00:17:45.520 "nvme_admin": false, 00:17:45.520 "nvme_io": false, 00:17:45.520 "nvme_io_md": false, 00:17:45.520 "write_zeroes": true, 00:17:45.520 "zcopy": true, 00:17:45.520 "get_zone_info": false, 00:17:45.520 "zone_management": false, 00:17:45.520 "zone_append": false, 00:17:45.520 "compare": false, 00:17:45.520 "compare_and_write": false, 00:17:45.520 "abort": true, 00:17:45.520 "seek_hole": false, 00:17:45.520 "seek_data": false, 00:17:45.520 "copy": true, 00:17:45.520 "nvme_iov_md": false 00:17:45.520 }, 00:17:45.520 "memory_domains": [ 00:17:45.520 { 00:17:45.520 "dma_device_id": "system", 00:17:45.520 "dma_device_type": 1 00:17:45.520 }, 00:17:45.520 { 00:17:45.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:45.520 "dma_device_type": 2 00:17:45.520 } 00:17:45.520 ], 00:17:45.520 "driver_specific": {} 00:17:45.520 } 00:17:45.520 ] 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:45.520 10:28:59 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.520 "name": "Existed_Raid", 00:17:45.520 "uuid": "caabb4fd-b219-4499-996a-91bb25c8fc66", 00:17:45.520 "strip_size_kb": 0, 00:17:45.520 "state": "configuring", 00:17:45.520 "raid_level": "raid1", 
00:17:45.520 "superblock": true, 00:17:45.520 "num_base_bdevs": 2, 00:17:45.520 "num_base_bdevs_discovered": 1, 00:17:45.520 "num_base_bdevs_operational": 2, 00:17:45.520 "base_bdevs_list": [ 00:17:45.520 { 00:17:45.520 "name": "BaseBdev1", 00:17:45.520 "uuid": "ed4629bc-d3cf-433e-b735-d096e41c262f", 00:17:45.520 "is_configured": true, 00:17:45.520 "data_offset": 256, 00:17:45.520 "data_size": 7936 00:17:45.520 }, 00:17:45.520 { 00:17:45.520 "name": "BaseBdev2", 00:17:45.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.520 "is_configured": false, 00:17:45.520 "data_offset": 0, 00:17:45.520 "data_size": 0 00:17:45.520 } 00:17:45.520 ] 00:17:45.520 }' 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.520 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.090 [2024-11-19 10:28:59.616636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:46.090 [2024-11-19 10:28:59.616714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.090 [2024-11-19 10:28:59.628665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:46.090 [2024-11-19 10:28:59.630315] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:46.090 [2024-11-19 10:28:59.630394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.090 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.091 
10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.091 "name": "Existed_Raid", 00:17:46.091 "uuid": "9ea1bf95-6598-4bb2-acdc-c97360478b4a", 00:17:46.091 "strip_size_kb": 0, 00:17:46.091 "state": "configuring", 00:17:46.091 "raid_level": "raid1", 00:17:46.091 "superblock": true, 00:17:46.091 "num_base_bdevs": 2, 00:17:46.091 "num_base_bdevs_discovered": 1, 00:17:46.091 "num_base_bdevs_operational": 2, 00:17:46.091 "base_bdevs_list": [ 00:17:46.091 { 00:17:46.091 "name": "BaseBdev1", 00:17:46.091 "uuid": "ed4629bc-d3cf-433e-b735-d096e41c262f", 00:17:46.091 "is_configured": true, 00:17:46.091 "data_offset": 256, 00:17:46.091 "data_size": 7936 00:17:46.091 }, 00:17:46.091 { 00:17:46.091 "name": "BaseBdev2", 00:17:46.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.091 "is_configured": false, 00:17:46.091 "data_offset": 0, 00:17:46.091 "data_size": 0 00:17:46.091 } 00:17:46.091 ] 00:17:46.091 }' 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:46.091 10:28:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.351 [2024-11-19 10:29:00.096631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:46.351 [2024-11-19 10:29:00.096872] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:46.351 [2024-11-19 10:29:00.096907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:46.351 [2024-11-19 10:29:00.097034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:46.351 [2024-11-19 10:29:00.097138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:46.351 [2024-11-19 10:29:00.097174] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:46.351 [2024-11-19 10:29:00.097260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.351 BaseBdev2 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.351 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.351 [ 00:17:46.351 { 00:17:46.351 "name": "BaseBdev2", 00:17:46.351 "aliases": [ 00:17:46.351 "9e0ffa97-a8a8-485a-8537-0755eba6fc10" 00:17:46.351 ], 00:17:46.351 "product_name": "Malloc disk", 00:17:46.351 "block_size": 4128, 00:17:46.351 "num_blocks": 8192, 00:17:46.351 "uuid": "9e0ffa97-a8a8-485a-8537-0755eba6fc10", 00:17:46.351 "md_size": 32, 00:17:46.351 "md_interleave": true, 00:17:46.351 "dif_type": 0, 00:17:46.351 "assigned_rate_limits": { 00:17:46.351 "rw_ios_per_sec": 0, 00:17:46.351 "rw_mbytes_per_sec": 0, 00:17:46.351 "r_mbytes_per_sec": 0, 00:17:46.351 "w_mbytes_per_sec": 0 00:17:46.351 }, 00:17:46.351 "claimed": true, 00:17:46.351 "claim_type": "exclusive_write", 
00:17:46.611 "zoned": false, 00:17:46.611 "supported_io_types": { 00:17:46.611 "read": true, 00:17:46.611 "write": true, 00:17:46.611 "unmap": true, 00:17:46.611 "flush": true, 00:17:46.611 "reset": true, 00:17:46.611 "nvme_admin": false, 00:17:46.611 "nvme_io": false, 00:17:46.611 "nvme_io_md": false, 00:17:46.611 "write_zeroes": true, 00:17:46.611 "zcopy": true, 00:17:46.611 "get_zone_info": false, 00:17:46.611 "zone_management": false, 00:17:46.611 "zone_append": false, 00:17:46.611 "compare": false, 00:17:46.611 "compare_and_write": false, 00:17:46.611 "abort": true, 00:17:46.611 "seek_hole": false, 00:17:46.611 "seek_data": false, 00:17:46.611 "copy": true, 00:17:46.611 "nvme_iov_md": false 00:17:46.611 }, 00:17:46.611 "memory_domains": [ 00:17:46.611 { 00:17:46.611 "dma_device_id": "system", 00:17:46.611 "dma_device_type": 1 00:17:46.611 }, 00:17:46.611 { 00:17:46.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.611 "dma_device_type": 2 00:17:46.611 } 00:17:46.611 ], 00:17:46.611 "driver_specific": {} 00:17:46.611 } 00:17:46.611 ] 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.611 
10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.611 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.611 "name": "Existed_Raid", 00:17:46.611 "uuid": "9ea1bf95-6598-4bb2-acdc-c97360478b4a", 00:17:46.611 "strip_size_kb": 0, 00:17:46.611 "state": "online", 00:17:46.611 "raid_level": "raid1", 00:17:46.611 "superblock": true, 00:17:46.611 "num_base_bdevs": 2, 00:17:46.611 "num_base_bdevs_discovered": 2, 00:17:46.611 
"num_base_bdevs_operational": 2, 00:17:46.611 "base_bdevs_list": [ 00:17:46.611 { 00:17:46.611 "name": "BaseBdev1", 00:17:46.611 "uuid": "ed4629bc-d3cf-433e-b735-d096e41c262f", 00:17:46.611 "is_configured": true, 00:17:46.611 "data_offset": 256, 00:17:46.611 "data_size": 7936 00:17:46.611 }, 00:17:46.611 { 00:17:46.611 "name": "BaseBdev2", 00:17:46.611 "uuid": "9e0ffa97-a8a8-485a-8537-0755eba6fc10", 00:17:46.611 "is_configured": true, 00:17:46.611 "data_offset": 256, 00:17:46.611 "data_size": 7936 00:17:46.611 } 00:17:46.611 ] 00:17:46.612 }' 00:17:46.612 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.612 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.871 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:46.871 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:46.871 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:46.871 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.872 10:29:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.872 [2024-11-19 10:29:00.592188] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:46.872 "name": "Existed_Raid", 00:17:46.872 "aliases": [ 00:17:46.872 "9ea1bf95-6598-4bb2-acdc-c97360478b4a" 00:17:46.872 ], 00:17:46.872 "product_name": "Raid Volume", 00:17:46.872 "block_size": 4128, 00:17:46.872 "num_blocks": 7936, 00:17:46.872 "uuid": "9ea1bf95-6598-4bb2-acdc-c97360478b4a", 00:17:46.872 "md_size": 32, 00:17:46.872 "md_interleave": true, 00:17:46.872 "dif_type": 0, 00:17:46.872 "assigned_rate_limits": { 00:17:46.872 "rw_ios_per_sec": 0, 00:17:46.872 "rw_mbytes_per_sec": 0, 00:17:46.872 "r_mbytes_per_sec": 0, 00:17:46.872 "w_mbytes_per_sec": 0 00:17:46.872 }, 00:17:46.872 "claimed": false, 00:17:46.872 "zoned": false, 00:17:46.872 "supported_io_types": { 00:17:46.872 "read": true, 00:17:46.872 "write": true, 00:17:46.872 "unmap": false, 00:17:46.872 "flush": false, 00:17:46.872 "reset": true, 00:17:46.872 "nvme_admin": false, 00:17:46.872 "nvme_io": false, 00:17:46.872 "nvme_io_md": false, 00:17:46.872 "write_zeroes": true, 00:17:46.872 "zcopy": false, 00:17:46.872 "get_zone_info": false, 00:17:46.872 "zone_management": false, 00:17:46.872 "zone_append": false, 00:17:46.872 "compare": false, 00:17:46.872 "compare_and_write": false, 00:17:46.872 "abort": false, 00:17:46.872 "seek_hole": false, 00:17:46.872 "seek_data": false, 00:17:46.872 "copy": false, 00:17:46.872 "nvme_iov_md": false 00:17:46.872 }, 00:17:46.872 "memory_domains": [ 00:17:46.872 { 00:17:46.872 "dma_device_id": "system", 00:17:46.872 "dma_device_type": 1 00:17:46.872 }, 00:17:46.872 { 00:17:46.872 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:46.872 "dma_device_type": 2 00:17:46.872 }, 00:17:46.872 { 00:17:46.872 "dma_device_id": "system", 00:17:46.872 "dma_device_type": 1 00:17:46.872 }, 00:17:46.872 { 00:17:46.872 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.872 "dma_device_type": 2 00:17:46.872 } 00:17:46.872 ], 00:17:46.872 "driver_specific": { 00:17:46.872 "raid": { 00:17:46.872 "uuid": "9ea1bf95-6598-4bb2-acdc-c97360478b4a", 00:17:46.872 "strip_size_kb": 0, 00:17:46.872 "state": "online", 00:17:46.872 "raid_level": "raid1", 00:17:46.872 "superblock": true, 00:17:46.872 "num_base_bdevs": 2, 00:17:46.872 "num_base_bdevs_discovered": 2, 00:17:46.872 "num_base_bdevs_operational": 2, 00:17:46.872 "base_bdevs_list": [ 00:17:46.872 { 00:17:46.872 "name": "BaseBdev1", 00:17:46.872 "uuid": "ed4629bc-d3cf-433e-b735-d096e41c262f", 00:17:46.872 "is_configured": true, 00:17:46.872 "data_offset": 256, 00:17:46.872 "data_size": 7936 00:17:46.872 }, 00:17:46.872 { 00:17:46.872 "name": "BaseBdev2", 00:17:46.872 "uuid": "9e0ffa97-a8a8-485a-8537-0755eba6fc10", 00:17:46.872 "is_configured": true, 00:17:46.872 "data_offset": 256, 00:17:46.872 "data_size": 7936 00:17:46.872 } 00:17:46.872 ] 00:17:46.872 } 00:17:46.872 } 00:17:46.872 }' 00:17:46.872 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:47.132 BaseBdev2' 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.132 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:47.133 
10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.133 [2024-11-19 10:29:00.795503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.133 10:29:00 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.133 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.393 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.393 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.393 "name": "Existed_Raid", 00:17:47.393 "uuid": "9ea1bf95-6598-4bb2-acdc-c97360478b4a", 00:17:47.393 "strip_size_kb": 0, 00:17:47.393 "state": "online", 00:17:47.393 "raid_level": "raid1", 00:17:47.393 "superblock": true, 00:17:47.393 "num_base_bdevs": 2, 00:17:47.393 "num_base_bdevs_discovered": 1, 00:17:47.393 "num_base_bdevs_operational": 1, 00:17:47.393 "base_bdevs_list": [ 00:17:47.393 { 00:17:47.393 "name": null, 00:17:47.393 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:47.393 "is_configured": false, 00:17:47.393 "data_offset": 0, 00:17:47.393 "data_size": 7936 00:17:47.393 }, 00:17:47.393 { 00:17:47.393 "name": "BaseBdev2", 00:17:47.393 "uuid": "9e0ffa97-a8a8-485a-8537-0755eba6fc10", 00:17:47.393 "is_configured": true, 00:17:47.393 "data_offset": 256, 00:17:47.393 "data_size": 7936 00:17:47.393 } 00:17:47.393 ] 00:17:47.393 }' 00:17:47.393 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.393 10:29:00 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:47.653 10:29:01 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.653 [2024-11-19 10:29:01.335568] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:47.653 [2024-11-19 10:29:01.335719] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:47.653 [2024-11-19 10:29:01.424556] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.653 [2024-11-19 10:29:01.424650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.653 [2024-11-19 10:29:01.424690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:47.653 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88123 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88123 ']' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88123 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88123 00:17:47.912 killing process with pid 88123 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88123' 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88123 00:17:47.912 [2024-11-19 10:29:01.525489] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.912 10:29:01 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88123 00:17:47.912 [2024-11-19 10:29:01.540946] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.851 
10:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:48.851 00:17:48.851 real 0m4.912s 00:17:48.851 user 0m7.089s 00:17:48.851 sys 0m0.909s 00:17:48.851 10:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.851 10:29:02 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.851 ************************************ 00:17:48.851 END TEST raid_state_function_test_sb_md_interleaved 00:17:48.851 ************************************ 00:17:49.111 10:29:02 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:49.111 10:29:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:49.111 10:29:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.111 10:29:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.111 ************************************ 00:17:49.111 START TEST raid_superblock_test_md_interleaved 00:17:49.111 ************************************ 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88375 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88375 00:17:49.111 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88375 ']' 00:17:49.112 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.112 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.112 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.112 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.112 10:29:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.112 [2024-11-19 10:29:02.746819] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:49.112 [2024-11-19 10:29:02.747006] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88375 ] 00:17:49.372 [2024-11-19 10:29:02.920848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.372 [2024-11-19 10:29:03.028654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.632 [2024-11-19 10:29:03.212509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.632 [2024-11-19 10:29:03.212564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.893 malloc1 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.893 [2024-11-19 10:29:03.608655] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:49.893 [2024-11-19 10:29:03.608748] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.893 [2024-11-19 10:29:03.608787] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:49.893 [2024-11-19 10:29:03.608815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.893 
[2024-11-19 10:29:03.610532] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.893 [2024-11-19 10:29:03.610601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:49.893 pt1 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.893 malloc2 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.893 [2024-11-19 10:29:03.661520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.893 [2024-11-19 10:29:03.661602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.893 [2024-11-19 10:29:03.661637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:49.893 [2024-11-19 10:29:03.661664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.893 [2024-11-19 10:29:03.663291] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.893 [2024-11-19 10:29:03.663365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.893 pt2 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.893 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.154 [2024-11-19 10:29:03.673538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.154 [2024-11-19 10:29:03.675217] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.154 [2024-11-19 10:29:03.675399] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:50.154 [2024-11-19 10:29:03.675413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:50.154 [2024-11-19 10:29:03.675479] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:50.154 [2024-11-19 10:29:03.675542] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:50.154 [2024-11-19 10:29:03.675552] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:50.154 [2024-11-19 10:29:03.675615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.154 
10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.154 "name": "raid_bdev1", 00:17:50.154 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:50.154 "strip_size_kb": 0, 00:17:50.154 "state": "online", 00:17:50.154 "raid_level": "raid1", 00:17:50.154 "superblock": true, 00:17:50.154 "num_base_bdevs": 2, 00:17:50.154 "num_base_bdevs_discovered": 2, 00:17:50.154 "num_base_bdevs_operational": 2, 00:17:50.154 "base_bdevs_list": [ 00:17:50.154 { 00:17:50.154 "name": "pt1", 00:17:50.154 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.154 "is_configured": true, 00:17:50.154 "data_offset": 256, 00:17:50.154 "data_size": 7936 00:17:50.154 }, 00:17:50.154 { 00:17:50.154 "name": "pt2", 00:17:50.154 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.154 "is_configured": true, 00:17:50.154 "data_offset": 256, 00:17:50.154 "data_size": 7936 00:17:50.154 } 00:17:50.154 ] 00:17:50.154 }' 00:17:50.154 10:29:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.154 10:29:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.415 [2024-11-19 10:29:04.140947] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.415 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:50.415 "name": "raid_bdev1", 00:17:50.415 "aliases": [ 00:17:50.415 "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae" 00:17:50.415 ], 00:17:50.415 "product_name": "Raid Volume", 00:17:50.415 "block_size": 4128, 00:17:50.415 "num_blocks": 7936, 00:17:50.415 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:50.415 "md_size": 32, 
00:17:50.415 "md_interleave": true, 00:17:50.415 "dif_type": 0, 00:17:50.415 "assigned_rate_limits": { 00:17:50.415 "rw_ios_per_sec": 0, 00:17:50.415 "rw_mbytes_per_sec": 0, 00:17:50.415 "r_mbytes_per_sec": 0, 00:17:50.415 "w_mbytes_per_sec": 0 00:17:50.415 }, 00:17:50.415 "claimed": false, 00:17:50.415 "zoned": false, 00:17:50.415 "supported_io_types": { 00:17:50.415 "read": true, 00:17:50.415 "write": true, 00:17:50.415 "unmap": false, 00:17:50.415 "flush": false, 00:17:50.415 "reset": true, 00:17:50.415 "nvme_admin": false, 00:17:50.415 "nvme_io": false, 00:17:50.415 "nvme_io_md": false, 00:17:50.415 "write_zeroes": true, 00:17:50.415 "zcopy": false, 00:17:50.416 "get_zone_info": false, 00:17:50.416 "zone_management": false, 00:17:50.416 "zone_append": false, 00:17:50.416 "compare": false, 00:17:50.416 "compare_and_write": false, 00:17:50.416 "abort": false, 00:17:50.416 "seek_hole": false, 00:17:50.416 "seek_data": false, 00:17:50.416 "copy": false, 00:17:50.416 "nvme_iov_md": false 00:17:50.416 }, 00:17:50.416 "memory_domains": [ 00:17:50.416 { 00:17:50.416 "dma_device_id": "system", 00:17:50.416 "dma_device_type": 1 00:17:50.416 }, 00:17:50.416 { 00:17:50.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.416 "dma_device_type": 2 00:17:50.416 }, 00:17:50.416 { 00:17:50.416 "dma_device_id": "system", 00:17:50.416 "dma_device_type": 1 00:17:50.416 }, 00:17:50.416 { 00:17:50.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.416 "dma_device_type": 2 00:17:50.416 } 00:17:50.416 ], 00:17:50.416 "driver_specific": { 00:17:50.416 "raid": { 00:17:50.416 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:50.416 "strip_size_kb": 0, 00:17:50.416 "state": "online", 00:17:50.416 "raid_level": "raid1", 00:17:50.416 "superblock": true, 00:17:50.416 "num_base_bdevs": 2, 00:17:50.416 "num_base_bdevs_discovered": 2, 00:17:50.416 "num_base_bdevs_operational": 2, 00:17:50.416 "base_bdevs_list": [ 00:17:50.416 { 00:17:50.416 "name": "pt1", 00:17:50.416 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:50.416 "is_configured": true, 00:17:50.416 "data_offset": 256, 00:17:50.416 "data_size": 7936 00:17:50.416 }, 00:17:50.416 { 00:17:50.416 "name": "pt2", 00:17:50.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.416 "is_configured": true, 00:17:50.416 "data_offset": 256, 00:17:50.416 "data_size": 7936 00:17:50.416 } 00:17:50.416 ] 00:17:50.416 } 00:17:50.416 } 00:17:50.416 }' 00:17:50.416 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:50.676 pt2' 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:50.676 10:29:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:50.676 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:50.677 [2024-11-19 10:29:04.352549] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae ']' 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.677 [2024-11-19 10:29:04.400239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.677 [2024-11-19 10:29:04.400259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.677 [2024-11-19 10:29:04.400324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.677 [2024-11-19 10:29:04.400370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.677 [2024-11-19 10:29:04.400380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.677 10:29:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.677 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.937 10:29:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.937 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.938 [2024-11-19 10:29:04.536053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:50.938 [2024-11-19 10:29:04.537762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:50.938 [2024-11-19 10:29:04.537820] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:17:50.938 [2024-11-19 10:29:04.537864] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:50.938 [2024-11-19 10:29:04.537877] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.938 [2024-11-19 10:29:04.537886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:50.938 request: 00:17:50.938 { 00:17:50.938 "name": "raid_bdev1", 00:17:50.938 "raid_level": "raid1", 00:17:50.938 "base_bdevs": [ 00:17:50.938 "malloc1", 00:17:50.938 "malloc2" 00:17:50.938 ], 00:17:50.938 "superblock": false, 00:17:50.938 "method": "bdev_raid_create", 00:17:50.938 "req_id": 1 00:17:50.938 } 00:17:50.938 Got JSON-RPC error response 00:17:50.938 response: 00:17:50.938 { 00:17:50.938 "code": -17, 00:17:50.938 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:50.938 } 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.938 10:29:04 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.938 [2024-11-19 10:29:04.603910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.938 [2024-11-19 10:29:04.603998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.938 [2024-11-19 10:29:04.604029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:50.938 [2024-11-19 10:29:04.604058] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.938 [2024-11-19 10:29:04.605762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.938 [2024-11-19 10:29:04.605828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.938 [2024-11-19 10:29:04.605885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:50.938 [2024-11-19 10:29:04.605952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.938 pt1 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.938 10:29:04 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.938 
"name": "raid_bdev1", 00:17:50.938 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:50.938 "strip_size_kb": 0, 00:17:50.938 "state": "configuring", 00:17:50.938 "raid_level": "raid1", 00:17:50.938 "superblock": true, 00:17:50.938 "num_base_bdevs": 2, 00:17:50.938 "num_base_bdevs_discovered": 1, 00:17:50.938 "num_base_bdevs_operational": 2, 00:17:50.938 "base_bdevs_list": [ 00:17:50.938 { 00:17:50.938 "name": "pt1", 00:17:50.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:50.938 "is_configured": true, 00:17:50.938 "data_offset": 256, 00:17:50.938 "data_size": 7936 00:17:50.938 }, 00:17:50.938 { 00:17:50.938 "name": null, 00:17:50.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.938 "is_configured": false, 00:17:50.938 "data_offset": 256, 00:17:50.938 "data_size": 7936 00:17:50.938 } 00:17:50.938 ] 00:17:50.938 }' 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.938 10:29:04 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.509 [2024-11-19 10:29:05.035163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:51.509 [2024-11-19 10:29:05.035253] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.509 [2024-11-19 10:29:05.035288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:51.509 [2024-11-19 10:29:05.035319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.509 [2024-11-19 10:29:05.035447] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.509 [2024-11-19 10:29:05.035498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:51.509 [2024-11-19 10:29:05.035548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:51.509 [2024-11-19 10:29:05.035613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:51.509 [2024-11-19 10:29:05.035712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.509 [2024-11-19 10:29:05.035750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:51.509 [2024-11-19 10:29:05.035828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:51.509 [2024-11-19 10:29:05.035926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.509 [2024-11-19 10:29:05.035963] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:51.509 [2024-11-19 10:29:05.036063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.509 pt2 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:51.509 10:29:05 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.509 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.509 "name": 
"raid_bdev1", 00:17:51.509 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:51.509 "strip_size_kb": 0, 00:17:51.509 "state": "online", 00:17:51.509 "raid_level": "raid1", 00:17:51.509 "superblock": true, 00:17:51.509 "num_base_bdevs": 2, 00:17:51.509 "num_base_bdevs_discovered": 2, 00:17:51.509 "num_base_bdevs_operational": 2, 00:17:51.509 "base_bdevs_list": [ 00:17:51.509 { 00:17:51.509 "name": "pt1", 00:17:51.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.509 "is_configured": true, 00:17:51.509 "data_offset": 256, 00:17:51.509 "data_size": 7936 00:17:51.509 }, 00:17:51.509 { 00:17:51.510 "name": "pt2", 00:17:51.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.510 "is_configured": true, 00:17:51.510 "data_offset": 256, 00:17:51.510 "data_size": 7936 00:17:51.510 } 00:17:51.510 ] 00:17:51.510 }' 00:17:51.510 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.510 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.770 [2024-11-19 10:29:05.462649] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.770 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.770 "name": "raid_bdev1", 00:17:51.770 "aliases": [ 00:17:51.770 "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae" 00:17:51.770 ], 00:17:51.770 "product_name": "Raid Volume", 00:17:51.770 "block_size": 4128, 00:17:51.770 "num_blocks": 7936, 00:17:51.770 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:51.770 "md_size": 32, 00:17:51.770 "md_interleave": true, 00:17:51.770 "dif_type": 0, 00:17:51.770 "assigned_rate_limits": { 00:17:51.770 "rw_ios_per_sec": 0, 00:17:51.770 "rw_mbytes_per_sec": 0, 00:17:51.770 "r_mbytes_per_sec": 0, 00:17:51.770 "w_mbytes_per_sec": 0 00:17:51.770 }, 00:17:51.770 "claimed": false, 00:17:51.770 "zoned": false, 00:17:51.770 "supported_io_types": { 00:17:51.770 "read": true, 00:17:51.770 "write": true, 00:17:51.770 "unmap": false, 00:17:51.770 "flush": false, 00:17:51.770 "reset": true, 00:17:51.770 "nvme_admin": false, 00:17:51.770 "nvme_io": false, 00:17:51.770 "nvme_io_md": false, 00:17:51.770 "write_zeroes": true, 00:17:51.770 "zcopy": false, 00:17:51.770 "get_zone_info": false, 00:17:51.770 "zone_management": false, 00:17:51.770 "zone_append": false, 00:17:51.770 "compare": false, 00:17:51.770 "compare_and_write": false, 00:17:51.770 "abort": false, 00:17:51.770 "seek_hole": false, 00:17:51.770 "seek_data": false, 00:17:51.770 "copy": false, 00:17:51.770 "nvme_iov_md": false 00:17:51.770 }, 
00:17:51.770 "memory_domains": [ 00:17:51.770 { 00:17:51.770 "dma_device_id": "system", 00:17:51.770 "dma_device_type": 1 00:17:51.770 }, 00:17:51.770 { 00:17:51.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.770 "dma_device_type": 2 00:17:51.770 }, 00:17:51.770 { 00:17:51.770 "dma_device_id": "system", 00:17:51.770 "dma_device_type": 1 00:17:51.770 }, 00:17:51.770 { 00:17:51.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.770 "dma_device_type": 2 00:17:51.770 } 00:17:51.770 ], 00:17:51.770 "driver_specific": { 00:17:51.770 "raid": { 00:17:51.770 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:51.770 "strip_size_kb": 0, 00:17:51.770 "state": "online", 00:17:51.770 "raid_level": "raid1", 00:17:51.770 "superblock": true, 00:17:51.770 "num_base_bdevs": 2, 00:17:51.770 "num_base_bdevs_discovered": 2, 00:17:51.770 "num_base_bdevs_operational": 2, 00:17:51.770 "base_bdevs_list": [ 00:17:51.770 { 00:17:51.770 "name": "pt1", 00:17:51.770 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:51.770 "is_configured": true, 00:17:51.770 "data_offset": 256, 00:17:51.770 "data_size": 7936 00:17:51.770 }, 00:17:51.770 { 00:17:51.770 "name": "pt2", 00:17:51.770 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:51.770 "is_configured": true, 00:17:51.770 "data_offset": 256, 00:17:51.770 "data_size": 7936 00:17:51.770 } 00:17:51.770 ] 00:17:51.770 } 00:17:51.770 } 00:17:51.770 }' 00:17:51.771 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.771 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:51.771 pt2' 00:17:51.771 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # 
cmp_raid_bdev='4128 32 true 0' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 
true 0' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.031 [2024-11-19 10:29:05.678321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae '!=' 9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae ']' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.031 [2024-11-19 10:29:05.722104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:17:52.031 "name": "raid_bdev1", 00:17:52.031 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:52.031 "strip_size_kb": 0, 00:17:52.031 "state": "online", 00:17:52.031 "raid_level": "raid1", 00:17:52.031 "superblock": true, 00:17:52.031 "num_base_bdevs": 2, 00:17:52.031 "num_base_bdevs_discovered": 1, 00:17:52.031 "num_base_bdevs_operational": 1, 00:17:52.031 "base_bdevs_list": [ 00:17:52.031 { 00:17:52.031 "name": null, 00:17:52.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.031 "is_configured": false, 00:17:52.031 "data_offset": 0, 00:17:52.031 "data_size": 7936 00:17:52.031 }, 00:17:52.031 { 00:17:52.031 "name": "pt2", 00:17:52.031 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.031 "is_configured": true, 00:17:52.031 "data_offset": 256, 00:17:52.031 "data_size": 7936 00:17:52.031 } 00:17:52.031 ] 00:17:52.031 }' 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.031 10:29:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.602 [2024-11-19 10:29:06.173279] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:52.602 [2024-11-19 10:29:06.173301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.602 [2024-11-19 10:29:06.173347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.602 [2024-11-19 10:29:06.173380] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.602 [2024-11-19 
10:29:06.173391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < 
num_base_bdevs )) 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.602 [2024-11-19 10:29:06.245171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:52.602 [2024-11-19 10:29:06.245214] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:52.602 [2024-11-19 10:29:06.245227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:52.602 [2024-11-19 10:29:06.245236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:52.602 [2024-11-19 10:29:06.247035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:52.602 [2024-11-19 10:29:06.247071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:52.602 [2024-11-19 10:29:06.247111] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:52.602 [2024-11-19 10:29:06.247149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:52.602 [2024-11-19 10:29:06.247199] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:52.602 [2024-11-19 10:29:06.247209] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 
00:17:52.602 [2024-11-19 10:29:06.247286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:52.602 [2024-11-19 10:29:06.247361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:52.602 [2024-11-19 10:29:06.247368] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:52.602 [2024-11-19 10:29:06.247419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.602 pt2 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.602 "name": "raid_bdev1", 00:17:52.602 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:52.602 "strip_size_kb": 0, 00:17:52.602 "state": "online", 00:17:52.602 "raid_level": "raid1", 00:17:52.602 "superblock": true, 00:17:52.602 "num_base_bdevs": 2, 00:17:52.602 "num_base_bdevs_discovered": 1, 00:17:52.602 "num_base_bdevs_operational": 1, 00:17:52.602 "base_bdevs_list": [ 00:17:52.602 { 00:17:52.602 "name": null, 00:17:52.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.602 "is_configured": false, 00:17:52.602 "data_offset": 256, 00:17:52.602 "data_size": 7936 00:17:52.602 }, 00:17:52.602 { 00:17:52.602 "name": "pt2", 00:17:52.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:52.602 "is_configured": true, 00:17:52.602 "data_offset": 256, 00:17:52.602 "data_size": 7936 00:17:52.602 } 00:17:52.602 ] 00:17:52.602 }' 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.602 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.171 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:53.171 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:53.171 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.171 [2024-11-19 10:29:06.688392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.171 [2024-11-19 10:29:06.688454] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:53.171 [2024-11-19 10:29:06.688513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.171 [2024-11-19 10:29:06.688563] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.171 [2024-11-19 10:29:06.688606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:53.171 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.171 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.172 [2024-11-19 10:29:06.736342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:53.172 [2024-11-19 10:29:06.736425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.172 [2024-11-19 10:29:06.736458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:53.172 [2024-11-19 10:29:06.736486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.172 [2024-11-19 10:29:06.738211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.172 [2024-11-19 10:29:06.738274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:53.172 [2024-11-19 10:29:06.738333] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:53.172 [2024-11-19 10:29:06.738382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:53.172 [2024-11-19 10:29:06.738469] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:53.172 [2024-11-19 10:29:06.738503] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:53.172 [2024-11-19 10:29:06.738528] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:53.172 [2024-11-19 10:29:06.738634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:53.172 [2024-11-19 10:29:06.738715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:53.172 [2024-11-19 10:29:06.738751] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:53.172 [2024-11-19 10:29:06.738819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:53.172 [2024-11-19 10:29:06.738902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:53.172 [2024-11-19 10:29:06.738941] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:53.172 [2024-11-19 10:29:06.739047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.172 pt1 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.172 "name": "raid_bdev1", 00:17:53.172 "uuid": "9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae", 00:17:53.172 "strip_size_kb": 0, 00:17:53.172 "state": "online", 00:17:53.172 "raid_level": "raid1", 00:17:53.172 "superblock": true, 00:17:53.172 "num_base_bdevs": 2, 00:17:53.172 "num_base_bdevs_discovered": 1, 00:17:53.172 "num_base_bdevs_operational": 1, 00:17:53.172 "base_bdevs_list": [ 00:17:53.172 { 00:17:53.172 "name": null, 00:17:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.172 "is_configured": false, 00:17:53.172 "data_offset": 256, 00:17:53.172 "data_size": 7936 00:17:53.172 }, 00:17:53.172 { 00:17:53.172 "name": "pt2", 00:17:53.172 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:53.172 "is_configured": true, 00:17:53.172 "data_offset": 256, 00:17:53.172 "data_size": 7936 00:17:53.172 } 00:17:53.172 ] 00:17:53.172 }' 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.172 10:29:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.769 [2024-11-19 10:29:07.283592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae '!=' 9d4b4976-99cf-4641-b5ac-3d8fc5cd67ae ']' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88375 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88375 ']' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88375 00:17:53.769 10:29:07 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88375 00:17:53.769 killing process with pid 88375 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88375' 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88375 00:17:53.769 [2024-11-19 10:29:07.367173] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:53.769 [2024-11-19 10:29:07.367237] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:53.769 [2024-11-19 10:29:07.367270] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:53.769 [2024-11-19 10:29:07.367282] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:53.769 10:29:07 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88375 00:17:54.029 [2024-11-19 10:29:07.557749] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.971 10:29:08 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:54.971 00:17:54.971 real 0m5.932s 00:17:54.971 user 0m9.003s 00:17:54.971 sys 0m1.134s 00:17:54.971 ************************************ 00:17:54.971 END TEST raid_superblock_test_md_interleaved 00:17:54.971 
************************************ 00:17:54.971 10:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.971 10:29:08 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.971 10:29:08 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:54.971 10:29:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:54.971 10:29:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.971 10:29:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.971 ************************************ 00:17:54.971 START TEST raid_rebuild_test_sb_md_interleaved 00:17:54.971 ************************************ 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88698 00:17:54.971 10:29:08 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88698 00:17:54.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88698 ']' 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.971 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.972 10:29:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.231 [2024-11-19 10:29:08.762137] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:17:55.231 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:55.231 Zero copy mechanism will not be used. 
00:17:55.231 [2024-11-19 10:29:08.762303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88698 ] 00:17:55.232 [2024-11-19 10:29:08.935234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.491 [2024-11-19 10:29:09.039771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.491 [2024-11-19 10:29:09.229328] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:55.491 [2024-11-19 10:29:09.229366] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 BaseBdev1_malloc 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 [2024-11-19 10:29:09.621505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:56.059 [2024-11-19 10:29:09.621559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.059 [2024-11-19 10:29:09.621579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:56.059 [2024-11-19 10:29:09.621589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.059 [2024-11-19 10:29:09.623299] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.059 [2024-11-19 10:29:09.623406] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:56.059 BaseBdev1 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 BaseBdev2_malloc 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:56.059 [2024-11-19 10:29:09.671280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:56.059 [2024-11-19 10:29:09.671337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.059 [2024-11-19 10:29:09.671362] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:56.059 [2024-11-19 10:29:09.671390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.059 [2024-11-19 10:29:09.673054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.059 [2024-11-19 10:29:09.673088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:56.059 BaseBdev2 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 spare_malloc 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 spare_delay 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 [2024-11-19 10:29:09.763775] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:56.059 [2024-11-19 10:29:09.763825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.059 [2024-11-19 10:29:09.763844] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:56.059 [2024-11-19 10:29:09.763854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.059 [2024-11-19 10:29:09.765581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.059 [2024-11-19 10:29:09.765619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:56.059 spare 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.059 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.059 [2024-11-19 10:29:09.775789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:56.059 [2024-11-19 10:29:09.777443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:56.059 [2024-11-19 
10:29:09.777613] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:56.059 [2024-11-19 10:29:09.777626] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:56.059 [2024-11-19 10:29:09.777696] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:56.060 [2024-11-19 10:29:09.777758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:56.060 [2024-11-19 10:29:09.777766] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:56.060 [2024-11-19 10:29:09.777824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.060 "name": "raid_bdev1", 00:17:56.060 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:17:56.060 "strip_size_kb": 0, 00:17:56.060 "state": "online", 00:17:56.060 "raid_level": "raid1", 00:17:56.060 "superblock": true, 00:17:56.060 "num_base_bdevs": 2, 00:17:56.060 "num_base_bdevs_discovered": 2, 00:17:56.060 "num_base_bdevs_operational": 2, 00:17:56.060 "base_bdevs_list": [ 00:17:56.060 { 00:17:56.060 "name": "BaseBdev1", 00:17:56.060 "uuid": "c4ad4122-fc43-5ed4-bbac-7eff1d079508", 00:17:56.060 "is_configured": true, 00:17:56.060 "data_offset": 256, 00:17:56.060 "data_size": 7936 00:17:56.060 }, 00:17:56.060 { 00:17:56.060 "name": "BaseBdev2", 00:17:56.060 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:17:56.060 "is_configured": true, 00:17:56.060 "data_offset": 256, 00:17:56.060 "data_size": 7936 00:17:56.060 } 00:17:56.060 ] 00:17:56.060 }' 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.060 10:29:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.630 10:29:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.630 [2024-11-19 10:29:10.231339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:56.630 10:29:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.630 [2024-11-19 10:29:10.330979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.630 10:29:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.630 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.630 "name": "raid_bdev1", 00:17:56.630 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:17:56.630 "strip_size_kb": 0, 00:17:56.630 "state": "online", 00:17:56.630 "raid_level": "raid1", 00:17:56.630 "superblock": true, 00:17:56.630 "num_base_bdevs": 2, 00:17:56.630 "num_base_bdevs_discovered": 1, 00:17:56.630 "num_base_bdevs_operational": 1, 00:17:56.630 "base_bdevs_list": [ 00:17:56.630 { 00:17:56.630 "name": null, 00:17:56.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.631 "is_configured": false, 00:17:56.631 "data_offset": 0, 00:17:56.631 "data_size": 7936 00:17:56.631 }, 00:17:56.631 { 00:17:56.631 "name": "BaseBdev2", 00:17:56.631 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:17:56.631 "is_configured": true, 00:17:56.631 "data_offset": 256, 00:17:56.631 "data_size": 7936 00:17:56.631 } 00:17:56.631 ] 00:17:56.631 }' 00:17:56.631 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.631 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.201 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:57.201 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.201 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:57.201 [2024-11-19 10:29:10.762239] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:57.201 [2024-11-19 10:29:10.779454] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:57.201 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.201 10:29:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:57.201 [2024-11-19 10:29:10.781223] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:58.142 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.143 "name": "raid_bdev1", 00:17:58.143 
"uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:17:58.143 "strip_size_kb": 0, 00:17:58.143 "state": "online", 00:17:58.143 "raid_level": "raid1", 00:17:58.143 "superblock": true, 00:17:58.143 "num_base_bdevs": 2, 00:17:58.143 "num_base_bdevs_discovered": 2, 00:17:58.143 "num_base_bdevs_operational": 2, 00:17:58.143 "process": { 00:17:58.143 "type": "rebuild", 00:17:58.143 "target": "spare", 00:17:58.143 "progress": { 00:17:58.143 "blocks": 2560, 00:17:58.143 "percent": 32 00:17:58.143 } 00:17:58.143 }, 00:17:58.143 "base_bdevs_list": [ 00:17:58.143 { 00:17:58.143 "name": "spare", 00:17:58.143 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:17:58.143 "is_configured": true, 00:17:58.143 "data_offset": 256, 00:17:58.143 "data_size": 7936 00:17:58.143 }, 00:17:58.143 { 00:17:58.143 "name": "BaseBdev2", 00:17:58.143 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:17:58.143 "is_configured": true, 00:17:58.143 "data_offset": 256, 00:17:58.143 "data_size": 7936 00:17:58.143 } 00:17:58.143 ] 00:17:58.143 }' 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.143 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.403 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.403 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.403 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.403 10:29:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.403 [2024-11-19 10:29:11.944671] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:58.403 [2024-11-19 10:29:11.985601] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:58.403 [2024-11-19 10:29:11.985698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.404 [2024-11-19 10:29:11.985728] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.404 [2024-11-19 10:29:11.985753] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.404 "name": "raid_bdev1", 00:17:58.404 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:17:58.404 "strip_size_kb": 0, 00:17:58.404 "state": "online", 00:17:58.404 "raid_level": "raid1", 00:17:58.404 "superblock": true, 00:17:58.404 "num_base_bdevs": 2, 00:17:58.404 "num_base_bdevs_discovered": 1, 00:17:58.404 "num_base_bdevs_operational": 1, 00:17:58.404 "base_bdevs_list": [ 00:17:58.404 { 00:17:58.404 "name": null, 00:17:58.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.404 "is_configured": false, 00:17:58.404 "data_offset": 0, 00:17:58.404 "data_size": 7936 00:17:58.404 }, 00:17:58.404 { 00:17:58.404 "name": "BaseBdev2", 00:17:58.404 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:17:58.404 "is_configured": true, 00:17:58.404 "data_offset": 256, 00:17:58.404 "data_size": 7936 00:17:58.404 } 00:17:58.404 ] 00:17:58.404 }' 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.404 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.974 "name": "raid_bdev1", 00:17:58.974 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:17:58.974 "strip_size_kb": 0, 00:17:58.974 "state": "online", 00:17:58.974 "raid_level": "raid1", 00:17:58.974 "superblock": true, 00:17:58.974 "num_base_bdevs": 2, 00:17:58.974 "num_base_bdevs_discovered": 1, 00:17:58.974 "num_base_bdevs_operational": 1, 00:17:58.974 "base_bdevs_list": [ 00:17:58.974 { 00:17:58.974 "name": null, 00:17:58.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.974 "is_configured": false, 00:17:58.974 "data_offset": 0, 00:17:58.974 "data_size": 7936 00:17:58.974 }, 00:17:58.974 { 00:17:58.974 "name": "BaseBdev2", 00:17:58.974 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:17:58.974 "is_configured": true, 00:17:58.974 "data_offset": 256, 00:17:58.974 "data_size": 7936 00:17:58.974 } 00:17:58.974 ] 00:17:58.974 }' 
00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.974 [2024-11-19 10:29:12.653939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.974 [2024-11-19 10:29:12.668982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.974 10:29:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:58.974 [2024-11-19 10:29:12.670693] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.915 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:59.915 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.915 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:59.915 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:59.916 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.916 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.916 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.916 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.916 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.176 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.176 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.176 "name": "raid_bdev1", 00:18:00.176 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:00.176 "strip_size_kb": 0, 00:18:00.176 "state": "online", 00:18:00.176 "raid_level": "raid1", 00:18:00.176 "superblock": true, 00:18:00.176 "num_base_bdevs": 2, 00:18:00.176 "num_base_bdevs_discovered": 2, 00:18:00.176 "num_base_bdevs_operational": 2, 00:18:00.176 "process": { 00:18:00.176 "type": "rebuild", 00:18:00.176 "target": "spare", 00:18:00.176 "progress": { 00:18:00.176 "blocks": 2560, 00:18:00.176 "percent": 32 00:18:00.176 } 00:18:00.176 }, 00:18:00.176 "base_bdevs_list": [ 00:18:00.176 { 00:18:00.176 "name": "spare", 00:18:00.176 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:00.176 "is_configured": true, 00:18:00.176 "data_offset": 256, 00:18:00.176 "data_size": 7936 00:18:00.176 }, 00:18:00.176 { 00:18:00.176 "name": "BaseBdev2", 00:18:00.176 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:00.176 "is_configured": true, 00:18:00.176 "data_offset": 256, 00:18:00.176 "data_size": 7936 00:18:00.176 } 00:18:00.176 ] 00:18:00.176 }' 00:18:00.176 10:29:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.176 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.176 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:00.177 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=718 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.177 10:29:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.177 "name": "raid_bdev1", 00:18:00.177 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:00.177 "strip_size_kb": 0, 00:18:00.177 "state": "online", 00:18:00.177 "raid_level": "raid1", 00:18:00.177 "superblock": true, 00:18:00.177 "num_base_bdevs": 2, 00:18:00.177 "num_base_bdevs_discovered": 2, 00:18:00.177 "num_base_bdevs_operational": 2, 00:18:00.177 "process": { 00:18:00.177 "type": "rebuild", 00:18:00.177 "target": "spare", 00:18:00.177 "progress": { 00:18:00.177 "blocks": 2816, 00:18:00.177 "percent": 35 00:18:00.177 } 00:18:00.177 }, 00:18:00.177 "base_bdevs_list": [ 00:18:00.177 { 00:18:00.177 "name": "spare", 00:18:00.177 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:00.177 "is_configured": true, 00:18:00.177 "data_offset": 256, 00:18:00.177 "data_size": 7936 00:18:00.177 }, 00:18:00.177 { 00:18:00.177 "name": "BaseBdev2", 00:18:00.177 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:00.177 "is_configured": true, 00:18:00.177 "data_offset": 256, 00:18:00.177 "data_size": 7936 00:18:00.177 } 00:18:00.177 ] 00:18:00.177 }' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.177 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.438 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.438 10:29:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:01.379 10:29:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.379 10:29:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.379 "name": "raid_bdev1", 00:18:01.379 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:01.379 "strip_size_kb": 0, 00:18:01.379 "state": "online", 00:18:01.379 "raid_level": "raid1", 00:18:01.379 "superblock": true, 00:18:01.379 "num_base_bdevs": 2, 00:18:01.379 "num_base_bdevs_discovered": 2, 00:18:01.379 "num_base_bdevs_operational": 2, 00:18:01.379 "process": { 00:18:01.379 "type": "rebuild", 00:18:01.379 "target": "spare", 00:18:01.379 "progress": { 00:18:01.379 "blocks": 5632, 00:18:01.379 "percent": 70 00:18:01.379 } 00:18:01.379 }, 00:18:01.379 "base_bdevs_list": [ 00:18:01.379 { 00:18:01.379 "name": "spare", 00:18:01.379 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:01.379 "is_configured": true, 00:18:01.379 "data_offset": 256, 00:18:01.379 "data_size": 7936 00:18:01.379 }, 00:18:01.379 { 00:18:01.379 "name": "BaseBdev2", 00:18:01.379 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:01.379 "is_configured": true, 00:18:01.379 "data_offset": 256, 00:18:01.379 "data_size": 7936 00:18:01.379 } 00:18:01.379 ] 00:18:01.379 }' 00:18:01.379 10:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.379 10:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.379 10:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.379 10:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.379 10:29:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:02.320 [2024-11-19 10:29:15.781653] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:02.320 [2024-11-19 10:29:15.781767] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:02.320 [2024-11-19 10:29:15.781876] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.580 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.581 "name": "raid_bdev1", 00:18:02.581 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:02.581 "strip_size_kb": 0, 00:18:02.581 "state": "online", 00:18:02.581 "raid_level": "raid1", 00:18:02.581 "superblock": true, 00:18:02.581 "num_base_bdevs": 2, 00:18:02.581 
"num_base_bdevs_discovered": 2, 00:18:02.581 "num_base_bdevs_operational": 2, 00:18:02.581 "base_bdevs_list": [ 00:18:02.581 { 00:18:02.581 "name": "spare", 00:18:02.581 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:02.581 "is_configured": true, 00:18:02.581 "data_offset": 256, 00:18:02.581 "data_size": 7936 00:18:02.581 }, 00:18:02.581 { 00:18:02.581 "name": "BaseBdev2", 00:18:02.581 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:02.581 "is_configured": true, 00:18:02.581 "data_offset": 256, 00:18:02.581 "data_size": 7936 00:18:02.581 } 00:18:02.581 ] 00:18:02.581 }' 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.581 10:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.581 "name": "raid_bdev1", 00:18:02.581 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:02.581 "strip_size_kb": 0, 00:18:02.581 "state": "online", 00:18:02.581 "raid_level": "raid1", 00:18:02.581 "superblock": true, 00:18:02.581 "num_base_bdevs": 2, 00:18:02.581 "num_base_bdevs_discovered": 2, 00:18:02.581 "num_base_bdevs_operational": 2, 00:18:02.581 "base_bdevs_list": [ 00:18:02.581 { 00:18:02.581 "name": "spare", 00:18:02.581 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:02.581 "is_configured": true, 00:18:02.581 "data_offset": 256, 00:18:02.581 "data_size": 7936 00:18:02.581 }, 00:18:02.581 { 00:18:02.581 "name": "BaseBdev2", 00:18:02.581 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:02.581 "is_configured": true, 00:18:02.581 "data_offset": 256, 00:18:02.581 "data_size": 7936 00:18:02.581 } 00:18:02.581 ] 00:18:02.581 }' 00:18:02.581 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.841 10:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.841 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.841 "name": 
"raid_bdev1", 00:18:02.841 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:02.841 "strip_size_kb": 0, 00:18:02.841 "state": "online", 00:18:02.841 "raid_level": "raid1", 00:18:02.841 "superblock": true, 00:18:02.841 "num_base_bdevs": 2, 00:18:02.841 "num_base_bdevs_discovered": 2, 00:18:02.841 "num_base_bdevs_operational": 2, 00:18:02.841 "base_bdevs_list": [ 00:18:02.841 { 00:18:02.841 "name": "spare", 00:18:02.841 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:02.841 "is_configured": true, 00:18:02.841 "data_offset": 256, 00:18:02.841 "data_size": 7936 00:18:02.841 }, 00:18:02.841 { 00:18:02.841 "name": "BaseBdev2", 00:18:02.841 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:02.841 "is_configured": true, 00:18:02.841 "data_offset": 256, 00:18:02.841 "data_size": 7936 00:18:02.842 } 00:18:02.842 ] 00:18:02.842 }' 00:18:02.842 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.842 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.102 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:03.102 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.102 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 [2024-11-19 10:29:16.884182] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:03.363 [2024-11-19 10:29:16.884253] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:03.363 [2024-11-19 10:29:16.884349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:03.363 [2024-11-19 10:29:16.884431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:03.363 [2024-11-19 
10:29:16.884487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.363 10:29:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 [2024-11-19 10:29:16.960077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:03.363 [2024-11-19 10:29:16.960121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.363 [2024-11-19 10:29:16.960139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:03.363 [2024-11-19 10:29:16.960148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.363 [2024-11-19 10:29:16.962055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.363 [2024-11-19 10:29:16.962088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:03.363 [2024-11-19 10:29:16.962137] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:03.363 [2024-11-19 10:29:16.962194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.363 [2024-11-19 10:29:16.962287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:03.363 spare 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.363 10:29:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 [2024-11-19 10:29:17.062168] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:03.363 [2024-11-19 10:29:17.062192] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:03.363 [2024-11-19 10:29:17.062272] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:03.363 [2024-11-19 10:29:17.062339] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:03.363 [2024-11-19 10:29:17.062348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:03.363 [2024-11-19 10:29:17.062413] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.363 10:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.363 "name": "raid_bdev1", 00:18:03.363 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:03.363 "strip_size_kb": 0, 00:18:03.363 "state": "online", 00:18:03.363 "raid_level": "raid1", 00:18:03.363 "superblock": true, 00:18:03.363 "num_base_bdevs": 2, 00:18:03.363 "num_base_bdevs_discovered": 2, 00:18:03.363 "num_base_bdevs_operational": 2, 00:18:03.363 "base_bdevs_list": [ 00:18:03.363 { 00:18:03.363 "name": "spare", 00:18:03.363 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:03.363 "is_configured": true, 00:18:03.363 "data_offset": 256, 00:18:03.363 "data_size": 7936 00:18:03.363 }, 00:18:03.363 { 00:18:03.363 "name": "BaseBdev2", 00:18:03.363 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:03.363 "is_configured": true, 00:18:03.363 "data_offset": 256, 00:18:03.363 "data_size": 7936 00:18:03.363 } 00:18:03.363 ] 00:18:03.363 }' 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.363 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:03.932 10:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.932 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.932 "name": "raid_bdev1", 00:18:03.932 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:03.932 "strip_size_kb": 0, 00:18:03.932 "state": "online", 00:18:03.932 "raid_level": "raid1", 00:18:03.932 "superblock": true, 00:18:03.932 "num_base_bdevs": 2, 00:18:03.932 "num_base_bdevs_discovered": 2, 00:18:03.933 "num_base_bdevs_operational": 2, 00:18:03.933 "base_bdevs_list": [ 00:18:03.933 { 00:18:03.933 "name": "spare", 00:18:03.933 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:03.933 "is_configured": true, 00:18:03.933 "data_offset": 256, 00:18:03.933 "data_size": 7936 00:18:03.933 }, 00:18:03.933 { 00:18:03.933 "name": "BaseBdev2", 00:18:03.933 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:03.933 "is_configured": true, 00:18:03.933 "data_offset": 256, 00:18:03.933 "data_size": 7936 00:18:03.933 } 00:18:03.933 ] 00:18:03.933 }' 00:18:03.933 10:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.933 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.192 [2024-11-19 10:29:17.731223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.192 10:29:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.192 "name": "raid_bdev1", 00:18:04.192 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:04.192 "strip_size_kb": 0, 00:18:04.192 "state": "online", 00:18:04.192 
"raid_level": "raid1", 00:18:04.192 "superblock": true, 00:18:04.192 "num_base_bdevs": 2, 00:18:04.192 "num_base_bdevs_discovered": 1, 00:18:04.192 "num_base_bdevs_operational": 1, 00:18:04.192 "base_bdevs_list": [ 00:18:04.192 { 00:18:04.192 "name": null, 00:18:04.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.192 "is_configured": false, 00:18:04.192 "data_offset": 0, 00:18:04.192 "data_size": 7936 00:18:04.192 }, 00:18:04.192 { 00:18:04.192 "name": "BaseBdev2", 00:18:04.192 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:04.192 "is_configured": true, 00:18:04.192 "data_offset": 256, 00:18:04.192 "data_size": 7936 00:18:04.192 } 00:18:04.192 ] 00:18:04.192 }' 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.192 10:29:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.452 10:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.452 10:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.452 10:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.452 [2024-11-19 10:29:18.198410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.452 [2024-11-19 10:29:18.198595] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.452 [2024-11-19 10:29:18.198658] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:04.452 [2024-11-19 10:29:18.198714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.452 [2024-11-19 10:29:18.214078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:04.452 10:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.452 10:29:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:04.452 [2024-11-19 10:29:18.215872] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:05.834 "name": "raid_bdev1", 00:18:05.834 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:05.834 "strip_size_kb": 0, 00:18:05.834 "state": "online", 00:18:05.834 "raid_level": "raid1", 00:18:05.834 "superblock": true, 00:18:05.834 "num_base_bdevs": 2, 00:18:05.834 "num_base_bdevs_discovered": 2, 00:18:05.834 "num_base_bdevs_operational": 2, 00:18:05.834 "process": { 00:18:05.834 "type": "rebuild", 00:18:05.834 "target": "spare", 00:18:05.834 "progress": { 00:18:05.834 "blocks": 2560, 00:18:05.834 "percent": 32 00:18:05.834 } 00:18:05.834 }, 00:18:05.834 "base_bdevs_list": [ 00:18:05.834 { 00:18:05.834 "name": "spare", 00:18:05.834 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:05.834 "is_configured": true, 00:18:05.834 "data_offset": 256, 00:18:05.834 "data_size": 7936 00:18:05.834 }, 00:18:05.834 { 00:18:05.834 "name": "BaseBdev2", 00:18:05.834 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:05.834 "is_configured": true, 00:18:05.834 "data_offset": 256, 00:18:05.834 "data_size": 7936 00:18:05.834 } 00:18:05.834 ] 00:18:05.834 }' 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.834 [2024-11-19 10:29:19.379492] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.834 [2024-11-19 10:29:19.420320] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:05.834 [2024-11-19 10:29:19.420442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.834 [2024-11-19 10:29:19.420478] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:05.834 [2024-11-19 10:29:19.420501] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.834 10:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.834 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.834 "name": "raid_bdev1", 00:18:05.834 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:05.834 "strip_size_kb": 0, 00:18:05.834 "state": "online", 00:18:05.834 "raid_level": "raid1", 00:18:05.834 "superblock": true, 00:18:05.834 "num_base_bdevs": 2, 00:18:05.834 "num_base_bdevs_discovered": 1, 00:18:05.834 "num_base_bdevs_operational": 1, 00:18:05.834 "base_bdevs_list": [ 00:18:05.834 { 00:18:05.834 "name": null, 00:18:05.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.834 "is_configured": false, 00:18:05.834 "data_offset": 0, 00:18:05.834 "data_size": 7936 00:18:05.834 }, 00:18:05.834 { 00:18:05.834 "name": "BaseBdev2", 00:18:05.835 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:05.835 "is_configured": true, 00:18:05.835 "data_offset": 256, 00:18:05.835 "data_size": 7936 00:18:05.835 } 00:18:05.835 ] 00:18:05.835 }' 00:18:05.835 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.835 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.405 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:06.405 10:29:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.405 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.405 [2024-11-19 10:29:19.881242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:06.405 [2024-11-19 10:29:19.881338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:06.405 [2024-11-19 10:29:19.881378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:06.405 [2024-11-19 10:29:19.881407] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:06.405 [2024-11-19 10:29:19.881604] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:06.405 [2024-11-19 10:29:19.881664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:06.405 [2024-11-19 10:29:19.881740] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:06.405 [2024-11-19 10:29:19.881780] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:06.405 [2024-11-19 10:29:19.881818] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:06.405 [2024-11-19 10:29:19.881907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:06.405 [2024-11-19 10:29:19.896708] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:06.405 spare 00:18:06.405 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.405 10:29:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:06.405 [2024-11-19 10:29:19.898577] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.347 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.348 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.348 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.348 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:07.348 "name": "raid_bdev1", 00:18:07.348 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:07.348 "strip_size_kb": 0, 00:18:07.348 "state": "online", 00:18:07.348 "raid_level": "raid1", 00:18:07.348 "superblock": true, 00:18:07.348 "num_base_bdevs": 2, 00:18:07.348 "num_base_bdevs_discovered": 2, 00:18:07.348 "num_base_bdevs_operational": 2, 00:18:07.348 "process": { 00:18:07.348 "type": "rebuild", 00:18:07.348 "target": "spare", 00:18:07.348 "progress": { 00:18:07.348 "blocks": 2560, 00:18:07.348 "percent": 32 00:18:07.348 } 00:18:07.348 }, 00:18:07.348 "base_bdevs_list": [ 00:18:07.348 { 00:18:07.348 "name": "spare", 00:18:07.348 "uuid": "fb3c93a1-fa25-59f2-81ec-49e7b0e6d381", 00:18:07.348 "is_configured": true, 00:18:07.348 "data_offset": 256, 00:18:07.348 "data_size": 7936 00:18:07.348 }, 00:18:07.348 { 00:18:07.348 "name": "BaseBdev2", 00:18:07.348 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:07.348 "is_configured": true, 00:18:07.348 "data_offset": 256, 00:18:07.348 "data_size": 7936 00:18:07.348 } 00:18:07.348 ] 00:18:07.348 }' 00:18:07.348 10:29:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.348 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.348 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.348 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.348 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:07.348 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.348 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.348 [2024-11-19 
10:29:21.046755] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.348 [2024-11-19 10:29:21.103107] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:07.348 [2024-11-19 10:29:21.103156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.348 [2024-11-19 10:29:21.103170] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:07.348 [2024-11-19 10:29:21.103177] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:07.608 10:29:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.608 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:07.608 "name": "raid_bdev1", 00:18:07.608 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:07.608 "strip_size_kb": 0, 00:18:07.608 "state": "online", 00:18:07.608 "raid_level": "raid1", 00:18:07.608 "superblock": true, 00:18:07.608 "num_base_bdevs": 2, 00:18:07.608 "num_base_bdevs_discovered": 1, 00:18:07.608 "num_base_bdevs_operational": 1, 00:18:07.608 "base_bdevs_list": [ 00:18:07.608 { 00:18:07.609 "name": null, 00:18:07.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.609 "is_configured": false, 00:18:07.609 "data_offset": 0, 00:18:07.609 "data_size": 7936 00:18:07.609 }, 00:18:07.609 { 00:18:07.609 "name": "BaseBdev2", 00:18:07.609 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:07.609 "is_configured": true, 00:18:07.609 "data_offset": 256, 00:18:07.609 "data_size": 7936 00:18:07.609 } 00:18:07.609 ] 00:18:07.609 }' 00:18:07.609 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:07.609 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.869 10:29:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.869 "name": "raid_bdev1", 00:18:07.869 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:07.869 "strip_size_kb": 0, 00:18:07.869 "state": "online", 00:18:07.869 "raid_level": "raid1", 00:18:07.869 "superblock": true, 00:18:07.869 "num_base_bdevs": 2, 00:18:07.869 "num_base_bdevs_discovered": 1, 00:18:07.869 "num_base_bdevs_operational": 1, 00:18:07.869 "base_bdevs_list": [ 00:18:07.869 { 00:18:07.869 "name": null, 00:18:07.869 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.869 "is_configured": false, 00:18:07.869 "data_offset": 0, 00:18:07.869 "data_size": 7936 00:18:07.869 }, 00:18:07.869 { 00:18:07.869 "name": "BaseBdev2", 00:18:07.869 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:07.869 "is_configured": true, 00:18:07.869 "data_offset": 256, 
00:18:07.869 "data_size": 7936 00:18:07.869 } 00:18:07.869 ] 00:18:07.869 }' 00:18:07.869 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.130 [2024-11-19 10:29:21.735087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:08.130 [2024-11-19 10:29:21.735136] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:08.130 [2024-11-19 10:29:21.735157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:08.130 [2024-11-19 10:29:21.735165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:08.130 [2024-11-19 10:29:21.735306] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:08.130 [2024-11-19 10:29:21.735317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:08.130 [2024-11-19 10:29:21.735371] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:08.130 [2024-11-19 10:29:21.735383] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:08.130 [2024-11-19 10:29:21.735392] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:08.130 [2024-11-19 10:29:21.735402] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:08.130 BaseBdev1 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.130 10:29:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.072 10:29:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.072 "name": "raid_bdev1", 00:18:09.072 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:09.072 "strip_size_kb": 0, 00:18:09.072 "state": "online", 00:18:09.072 "raid_level": "raid1", 00:18:09.072 "superblock": true, 00:18:09.072 "num_base_bdevs": 2, 00:18:09.072 "num_base_bdevs_discovered": 1, 00:18:09.072 "num_base_bdevs_operational": 1, 00:18:09.072 "base_bdevs_list": [ 00:18:09.072 { 00:18:09.072 "name": null, 00:18:09.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.072 "is_configured": false, 00:18:09.072 "data_offset": 0, 00:18:09.072 "data_size": 7936 00:18:09.072 }, 00:18:09.072 { 00:18:09.072 "name": "BaseBdev2", 00:18:09.072 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:09.072 "is_configured": true, 00:18:09.072 "data_offset": 256, 00:18:09.072 "data_size": 7936 00:18:09.072 } 00:18:09.072 ] 00:18:09.072 }' 00:18:09.072 10:29:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.072 10:29:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:09.643 "name": "raid_bdev1", 00:18:09.643 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:09.643 "strip_size_kb": 0, 00:18:09.643 "state": "online", 00:18:09.643 "raid_level": "raid1", 00:18:09.643 "superblock": true, 00:18:09.643 "num_base_bdevs": 2, 00:18:09.643 "num_base_bdevs_discovered": 1, 00:18:09.643 "num_base_bdevs_operational": 1, 00:18:09.643 "base_bdevs_list": [ 00:18:09.643 { 00:18:09.643 "name": 
null, 00:18:09.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.643 "is_configured": false, 00:18:09.643 "data_offset": 0, 00:18:09.643 "data_size": 7936 00:18:09.643 }, 00:18:09.643 { 00:18:09.643 "name": "BaseBdev2", 00:18:09.643 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:09.643 "is_configured": true, 00:18:09.643 "data_offset": 256, 00:18:09.643 "data_size": 7936 00:18:09.643 } 00:18:09.643 ] 00:18:09.643 }' 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.643 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.643 [2024-11-19 10:29:23.300402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.643 [2024-11-19 10:29:23.300545] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:09.643 [2024-11-19 10:29:23.300563] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:09.643 request: 00:18:09.644 { 00:18:09.644 "base_bdev": "BaseBdev1", 00:18:09.644 "raid_bdev": "raid_bdev1", 00:18:09.644 "method": "bdev_raid_add_base_bdev", 00:18:09.644 "req_id": 1 00:18:09.644 } 00:18:09.644 Got JSON-RPC error response 00:18:09.644 response: 00:18:09.644 { 00:18:09.644 "code": -22, 00:18:09.644 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:09.644 } 00:18:09.644 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:09.644 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:09.644 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.644 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.644 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.644 10:29:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.584 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.844 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.844 "name": "raid_bdev1", 00:18:10.844 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:10.844 "strip_size_kb": 0, 
00:18:10.844 "state": "online", 00:18:10.844 "raid_level": "raid1", 00:18:10.844 "superblock": true, 00:18:10.844 "num_base_bdevs": 2, 00:18:10.844 "num_base_bdevs_discovered": 1, 00:18:10.844 "num_base_bdevs_operational": 1, 00:18:10.844 "base_bdevs_list": [ 00:18:10.844 { 00:18:10.844 "name": null, 00:18:10.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.844 "is_configured": false, 00:18:10.844 "data_offset": 0, 00:18:10.844 "data_size": 7936 00:18:10.844 }, 00:18:10.844 { 00:18:10.844 "name": "BaseBdev2", 00:18:10.844 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:10.844 "is_configured": true, 00:18:10.844 "data_offset": 256, 00:18:10.844 "data_size": 7936 00:18:10.844 } 00:18:10.844 ] 00:18:10.844 }' 00:18:10.844 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.844 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.105 
10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.105 "name": "raid_bdev1", 00:18:11.105 "uuid": "4876bd0d-7895-4687-b8cf-8e7b3d850377", 00:18:11.105 "strip_size_kb": 0, 00:18:11.105 "state": "online", 00:18:11.105 "raid_level": "raid1", 00:18:11.105 "superblock": true, 00:18:11.105 "num_base_bdevs": 2, 00:18:11.105 "num_base_bdevs_discovered": 1, 00:18:11.105 "num_base_bdevs_operational": 1, 00:18:11.105 "base_bdevs_list": [ 00:18:11.105 { 00:18:11.105 "name": null, 00:18:11.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.105 "is_configured": false, 00:18:11.105 "data_offset": 0, 00:18:11.105 "data_size": 7936 00:18:11.105 }, 00:18:11.105 { 00:18:11.105 "name": "BaseBdev2", 00:18:11.105 "uuid": "584bd028-ebdb-5f89-942f-e451b06d519c", 00:18:11.105 "is_configured": true, 00:18:11.105 "data_offset": 256, 00:18:11.105 "data_size": 7936 00:18:11.105 } 00:18:11.105 ] 00:18:11.105 }' 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88698 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88698 ']' 00:18:11.105 10:29:24 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88698 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.105 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88698 00:18:11.375 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:11.375 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:11.375 killing process with pid 88698 00:18:11.375 Received shutdown signal, test time was about 60.000000 seconds 00:18:11.375 00:18:11.375 Latency(us) 00:18:11.375 [2024-11-19T10:29:25.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.375 [2024-11-19T10:29:25.156Z] =================================================================================================================== 00:18:11.375 [2024-11-19T10:29:25.156Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:11.375 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88698' 00:18:11.375 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88698 00:18:11.375 [2024-11-19 10:29:24.902278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.375 [2024-11-19 10:29:24.902387] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.375 [2024-11-19 10:29:24.902431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.376 [2024-11-19 10:29:24.902442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:11.376 10:29:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88698 00:18:11.684 [2024-11-19 10:29:25.177646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.637 10:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:12.637 00:18:12.637 real 0m17.519s 00:18:12.637 user 0m23.068s 00:18:12.637 sys 0m1.712s 00:18:12.637 ************************************ 00:18:12.637 END TEST raid_rebuild_test_sb_md_interleaved 00:18:12.637 ************************************ 00:18:12.637 10:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.637 10:29:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.637 10:29:26 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:12.637 10:29:26 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:12.637 10:29:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88698 ']' 00:18:12.637 10:29:26 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88698 00:18:12.638 10:29:26 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:12.638 00:18:12.638 real 11m40.575s 00:18:12.638 user 15m50.296s 00:18:12.638 sys 1m48.128s 00:18:12.638 10:29:26 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:12.638 10:29:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.638 ************************************ 00:18:12.638 END TEST bdev_raid 00:18:12.638 ************************************ 00:18:12.638 10:29:26 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:12.638 10:29:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:12.638 10:29:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:12.638 10:29:26 -- common/autotest_common.sh@10 -- # set +x 00:18:12.638 
************************************ 00:18:12.638 START TEST spdkcli_raid 00:18:12.638 ************************************ 00:18:12.638 10:29:26 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:12.898 * Looking for test storage... 00:18:12.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:12.899 10:29:26 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.899 --rc genhtml_branch_coverage=1 00:18:12.899 --rc genhtml_function_coverage=1 00:18:12.899 --rc genhtml_legend=1 00:18:12.899 --rc geninfo_all_blocks=1 00:18:12.899 --rc geninfo_unexecuted_blocks=1 00:18:12.899 00:18:12.899 ' 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.899 --rc genhtml_branch_coverage=1 00:18:12.899 --rc genhtml_function_coverage=1 00:18:12.899 --rc genhtml_legend=1 00:18:12.899 --rc geninfo_all_blocks=1 00:18:12.899 --rc geninfo_unexecuted_blocks=1 00:18:12.899 00:18:12.899 ' 00:18:12.899 
10:29:26 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.899 --rc genhtml_branch_coverage=1 00:18:12.899 --rc genhtml_function_coverage=1 00:18:12.899 --rc genhtml_legend=1 00:18:12.899 --rc geninfo_all_blocks=1 00:18:12.899 --rc geninfo_unexecuted_blocks=1 00:18:12.899 00:18:12.899 ' 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:12.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:12.899 --rc genhtml_branch_coverage=1 00:18:12.899 --rc genhtml_function_coverage=1 00:18:12.899 --rc genhtml_legend=1 00:18:12.899 --rc geninfo_all_blocks=1 00:18:12.899 --rc geninfo_unexecuted_blocks=1 00:18:12.899 00:18:12.899 ' 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:12.899 10:29:26 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89380 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:12.899 10:29:26 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89380 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89380 ']' 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.899 10:29:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:13.160 [2024-11-19 10:29:26.731489] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:13.160 [2024-11-19 10:29:26.731593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89380 ] 00:18:13.160 [2024-11-19 10:29:26.903263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:13.418 [2024-11-19 10:29:27.011416] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.418 [2024-11-19 10:29:27.011450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.353 10:29:27 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.353 10:29:27 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:14.353 10:29:27 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:14.353 10:29:27 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.353 10:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.353 10:29:27 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:14.353 10:29:27 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.353 10:29:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:14.353 10:29:27 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:14.353 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:14.353 ' 00:18:15.729 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:15.729 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:15.987 10:29:29 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:15.987 10:29:29 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.987 10:29:29 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:15.987 10:29:29 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:15.987 10:29:29 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.987 10:29:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:15.987 10:29:29 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:15.987 ' 00:18:16.923 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:17.182 10:29:30 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:17.182 10:29:30 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.182 10:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.182 10:29:30 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:17.182 10:29:30 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.182 10:29:30 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.182 10:29:30 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:17.182 10:29:30 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:17.751 10:29:31 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:17.751 10:29:31 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:17.751 10:29:31 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:17.751 10:29:31 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.751 10:29:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.751 10:29:31 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:17.751 10:29:31 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.751 10:29:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.751 10:29:31 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:17.751 ' 00:18:18.686 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:18.687 10:29:32 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:18.687 10:29:32 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:18.687 10:29:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.687 10:29:32 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:18.687 10:29:32 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.687 10:29:32 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:18.945 10:29:32 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:18.945 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:18.945 ' 00:18:20.323 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:20.323 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:20.323 10:29:33 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:20.323 10:29:33 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:20.323 10:29:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.323 10:29:34 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89380 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89380 ']' 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89380 00:18:20.323 10:29:34 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89380 00:18:20.323 killing process with pid 89380 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89380' 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89380 00:18:20.323 10:29:34 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89380 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89380 ']' 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89380 00:18:22.858 10:29:36 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89380 ']' 00:18:22.858 10:29:36 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89380 00:18:22.858 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89380) - No such process 00:18:22.858 Process with pid 89380 is not found 00:18:22.858 10:29:36 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89380 is not found' 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:22.858 10:29:36 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:22.858 00:18:22.858 real 0m9.876s 00:18:22.858 user 0m20.310s 00:18:22.858 sys 
0m1.148s 00:18:22.858 10:29:36 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:22.858 10:29:36 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:22.858 ************************************ 00:18:22.858 END TEST spdkcli_raid 00:18:22.858 ************************************ 00:18:22.858 10:29:36 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:22.858 10:29:36 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:22.858 10:29:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:22.858 10:29:36 -- common/autotest_common.sh@10 -- # set +x 00:18:22.858 ************************************ 00:18:22.858 START TEST blockdev_raid5f 00:18:22.858 ************************************ 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:22.858 * Looking for test storage... 00:18:22.858 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.858 10:29:36 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:22.858 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.858 --rc genhtml_branch_coverage=1 00:18:22.858 --rc genhtml_function_coverage=1 00:18:22.858 --rc genhtml_legend=1 00:18:22.858 --rc geninfo_all_blocks=1 00:18:22.858 --rc geninfo_unexecuted_blocks=1 00:18:22.858 00:18:22.858 ' 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:22.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.858 --rc genhtml_branch_coverage=1 00:18:22.858 --rc genhtml_function_coverage=1 00:18:22.858 --rc genhtml_legend=1 00:18:22.858 --rc geninfo_all_blocks=1 00:18:22.858 --rc geninfo_unexecuted_blocks=1 00:18:22.858 00:18:22.858 ' 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:22.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.858 --rc genhtml_branch_coverage=1 00:18:22.858 --rc genhtml_function_coverage=1 00:18:22.858 --rc genhtml_legend=1 00:18:22.858 --rc geninfo_all_blocks=1 00:18:22.858 --rc geninfo_unexecuted_blocks=1 00:18:22.858 00:18:22.858 ' 00:18:22.858 10:29:36 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:22.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.858 --rc genhtml_branch_coverage=1 00:18:22.858 --rc genhtml_function_coverage=1 00:18:22.858 --rc genhtml_legend=1 00:18:22.858 --rc geninfo_all_blocks=1 00:18:22.858 --rc geninfo_unexecuted_blocks=1 00:18:22.858 00:18:22.858 ' 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:22.858 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89650 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:22.859 10:29:36 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89650 00:18:22.859 10:29:36 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89650 ']' 00:18:22.859 10:29:36 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.859 10:29:36 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:22.859 10:29:36 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.859 10:29:36 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:22.859 10:29:36 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:23.118 [2024-11-19 10:29:36.654955] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:23.119 [2024-11-19 10:29:36.655190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89650 ] 00:18:23.119 [2024-11-19 10:29:36.825383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.378 [2024-11-19 10:29:36.928503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.948 10:29:37 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.948 10:29:37 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:23.948 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:23.948 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:23.948 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:23.948 10:29:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.948 10:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 Malloc0 00:18:24.208 Malloc1 00:18:24.208 Malloc2 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 10:29:37 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:18:24.208 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.208 10:29:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.468 10:29:37 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.468 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:24.468 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:24.468 10:29:37 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fa55ef5f-71a4-488b-91f8-1ed1f867e231"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "fa55ef5f-71a4-488b-91f8-1ed1f867e231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fa55ef5f-71a4-488b-91f8-1ed1f867e231",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4a3020e4-3113-4cc6-aaa5-5f598e23ffc4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "eb61339b-5f39-47aa-9ed7-bcde279d3422",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "cfbb9740-9d89-4db8-a812-db7084bf68ad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:24.468 10:29:38 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:24.468 10:29:38 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:24.468 10:29:38 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:24.468 10:29:38 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89650 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89650 ']' 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89650 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.468 
10:29:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89650 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89650' 00:18:24.468 killing process with pid 89650 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89650 00:18:24.468 10:29:38 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89650 00:18:27.007 10:29:40 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:27.007 10:29:40 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:27.007 10:29:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:27.007 10:29:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:27.007 10:29:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:27.007 ************************************ 00:18:27.007 START TEST bdev_hello_world 00:18:27.007 ************************************ 00:18:27.007 10:29:40 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:27.007 [2024-11-19 10:29:40.588739] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:27.007 [2024-11-19 10:29:40.588844] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89717 ] 00:18:27.007 [2024-11-19 10:29:40.762412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.267 [2024-11-19 10:29:40.867928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.837 [2024-11-19 10:29:41.354773] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:27.837 [2024-11-19 10:29:41.354889] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:27.837 [2024-11-19 10:29:41.354910] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:27.837 [2024-11-19 10:29:41.355414] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:27.837 [2024-11-19 10:29:41.355527] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:27.837 [2024-11-19 10:29:41.355543] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:27.837 [2024-11-19 10:29:41.355586] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:18:27.837 00:18:27.837 [2024-11-19 10:29:41.355601] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:29.220 ************************************ 00:18:29.220 END TEST bdev_hello_world 00:18:29.220 ************************************ 00:18:29.220 00:18:29.220 real 0m2.133s 00:18:29.220 user 0m1.767s 00:18:29.220 sys 0m0.243s 00:18:29.220 10:29:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:29.220 10:29:42 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:29.220 10:29:42 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:29.220 10:29:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:29.220 10:29:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:29.220 10:29:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:29.220 ************************************ 00:18:29.220 START TEST bdev_bounds 00:18:29.220 ************************************ 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89759 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89759' 00:18:29.220 Process bdevio pid: 89759 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89759 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89759 ']' 00:18:29.220 10:29:42 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.220 10:29:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:29.220 [2024-11-19 10:29:42.806946] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:29.220 [2024-11-19 10:29:42.807090] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89759 ] 00:18:29.220 [2024-11-19 10:29:42.986276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:29.480 [2024-11-19 10:29:43.095263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:29.480 [2024-11-19 10:29:43.095507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.480 [2024-11-19 10:29:43.095522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.047 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:30.047 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:30.047 10:29:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:30.047 I/O targets: 00:18:30.047 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:30.047 00:18:30.047 
00:18:30.047 CUnit - A unit testing framework for C - Version 2.1-3 00:18:30.047 http://cunit.sourceforge.net/ 00:18:30.047 00:18:30.047 00:18:30.047 Suite: bdevio tests on: raid5f 00:18:30.047 Test: blockdev write read block ...passed 00:18:30.047 Test: blockdev write zeroes read block ...passed 00:18:30.047 Test: blockdev write zeroes read no split ...passed 00:18:30.047 Test: blockdev write zeroes read split ...passed 00:18:30.306 Test: blockdev write zeroes read split partial ...passed 00:18:30.306 Test: blockdev reset ...passed 00:18:30.306 Test: blockdev write read 8 blocks ...passed 00:18:30.306 Test: blockdev write read size > 128k ...passed 00:18:30.306 Test: blockdev write read invalid size ...passed 00:18:30.306 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:30.306 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:30.306 Test: blockdev write read max offset ...passed 00:18:30.306 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:30.306 Test: blockdev writev readv 8 blocks ...passed 00:18:30.306 Test: blockdev writev readv 30 x 1block ...passed 00:18:30.306 Test: blockdev writev readv block ...passed 00:18:30.306 Test: blockdev writev readv size > 128k ...passed 00:18:30.306 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:30.306 Test: blockdev comparev and writev ...passed 00:18:30.306 Test: blockdev nvme passthru rw ...passed 00:18:30.306 Test: blockdev nvme passthru vendor specific ...passed 00:18:30.306 Test: blockdev nvme admin passthru ...passed 00:18:30.306 Test: blockdev copy ...passed 00:18:30.306 00:18:30.306 Run Summary: Type Total Ran Passed Failed Inactive 00:18:30.306 suites 1 1 n/a 0 0 00:18:30.306 tests 23 23 23 0 0 00:18:30.306 asserts 130 130 130 0 n/a 00:18:30.306 00:18:30.306 Elapsed time = 0.580 seconds 00:18:30.306 0 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89759 00:18:30.306 
10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89759 ']' 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89759 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89759 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89759' 00:18:30.306 killing process with pid 89759 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89759 00:18:30.306 10:29:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89759 00:18:31.684 10:29:45 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:31.684 00:18:31.684 real 0m2.576s 00:18:31.684 user 0m6.311s 00:18:31.684 sys 0m0.362s 00:18:31.684 10:29:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.684 10:29:45 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:31.684 ************************************ 00:18:31.684 END TEST bdev_bounds 00:18:31.684 ************************************ 00:18:31.684 10:29:45 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:31.684 10:29:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:31.684 10:29:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.684 
10:29:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:31.685 ************************************ 00:18:31.685 START TEST bdev_nbd 00:18:31.685 ************************************ 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89819 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89819 /var/tmp/spdk-nbd.sock 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89819 ']' 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:31.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.685 10:29:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:31.951 [2024-11-19 10:29:45.475477] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:31.951 [2024-11-19 10:29:45.475713] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.951 [2024-11-19 10:29:45.652679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.239 [2024-11-19 10:29:45.761902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:32.514 10:29:46 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.774 1+0 records in 00:18:32.774 1+0 records out 00:18:32.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412946 s, 9.9 MB/s 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:32.774 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:33.034 { 00:18:33.034 "nbd_device": "/dev/nbd0", 00:18:33.034 "bdev_name": "raid5f" 00:18:33.034 } 00:18:33.034 ]' 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:33.034 { 00:18:33.034 "nbd_device": "/dev/nbd0", 00:18:33.034 "bdev_name": "raid5f" 00:18:33.034 } 00:18:33.034 ]' 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.034 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:33.295 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:33.295 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:33.295 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:33.295 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:33.295 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:33.295 10:29:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:33.295 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:33.295 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:33.295 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:33.295 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.295 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.555 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:33.555 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:33.556 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:33.816 /dev/nbd0 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:33.816 10:29:47 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.816 1+0 records in 00:18:33.816 1+0 records out 00:18:33.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000547395 s, 7.5 MB/s 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.816 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:34.077 { 00:18:34.077 "nbd_device": "/dev/nbd0", 00:18:34.077 "bdev_name": "raid5f" 00:18:34.077 } 00:18:34.077 ]' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:34.077 { 00:18:34.077 "nbd_device": "/dev/nbd0", 00:18:34.077 "bdev_name": "raid5f" 00:18:34.077 } 00:18:34.077 ]' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:34.077 256+0 records in 00:18:34.077 256+0 records out 00:18:34.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00577951 s, 181 MB/s 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:34.077 256+0 records in 00:18:34.077 256+0 records out 00:18:34.077 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0278626 s, 37.6 MB/s 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.077 10:29:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.337 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:34.598 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:34.858 malloc_lvol_verify 00:18:34.858 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:35.118 5c214f63-60d2-46fd-916a-74b8a6ffb049 00:18:35.118 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:35.118 157fabd3-ea09-472a-8fb0-5292855f3ff0 00:18:35.118 10:29:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:35.378 /dev/nbd0 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:35.378 mke2fs 1.47.0 (5-Feb-2023) 00:18:35.378 Discarding device blocks: 0/4096 done 00:18:35.378 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:35.378 00:18:35.378 Allocating group tables: 0/1 done 00:18:35.378 Writing inode tables: 0/1 done 00:18:35.378 Creating journal (1024 blocks): done 00:18:35.378 Writing superblocks and filesystem accounting information: 0/1 done 00:18:35.378 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:35.378 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:35.643 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:35.643 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:35.643 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:35.643 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.643 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89819 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89819 ']' 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89819 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89819 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89819' 00:18:35.644 killing process with pid 89819 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89819 00:18:35.644 10:29:49 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89819 00:18:37.031 ************************************ 00:18:37.031 END TEST bdev_nbd 00:18:37.031 ************************************ 00:18:37.031 10:29:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:37.031 00:18:37.031 real 0m5.366s 00:18:37.031 user 0m7.240s 00:18:37.031 sys 0m1.297s 00:18:37.031 10:29:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.031 10:29:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:37.031 10:29:50 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:37.031 10:29:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:37.031 10:29:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:37.031 10:29:50 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:37.031 10:29:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:37.031 10:29:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.031 10:29:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.031 ************************************ 00:18:37.031 START TEST bdev_fio 00:18:37.031 ************************************ 00:18:37.031 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:37.031 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:37.031 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:37.031 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:37.031 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:37.031 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:37.031 10:29:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:37.292 ************************************ 00:18:37.292 START TEST bdev_fio_rw_verify 00:18:37.292 ************************************ 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:37.292 10:29:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:37.292 10:29:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:37.292 10:29:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:37.292 10:29:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:37.292 10:29:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:37.292 10:29:51 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:37.553 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:37.553 fio-3.35 00:18:37.553 Starting 1 thread 00:18:49.778 00:18:49.778 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90018: Tue Nov 19 10:30:02 2024 00:18:49.778 read: IOPS=12.3k, BW=48.2MiB/s (50.5MB/s)(482MiB/10001msec) 00:18:49.778 slat (nsec): min=16747, max=59512, avg=19054.80, stdev=2132.39 00:18:49.778 clat (usec): min=10, max=298, avg=129.73, stdev=45.13 00:18:49.778 lat (usec): min=29, max=318, avg=148.78, stdev=45.35 00:18:49.778 clat percentiles (usec): 00:18:49.778 | 50.000th=[ 133], 99.000th=[ 215], 99.900th=[ 233], 99.990th=[ 262], 00:18:49.778 | 99.999th=[ 281] 00:18:49.778 write: IOPS=12.9k, BW=50.4MiB/s (52.8MB/s)(498MiB/9875msec); 0 zone resets 00:18:49.778 slat (usec): min=7, max=239, avg=16.27, stdev= 3.68 00:18:49.778 clat (usec): min=56, max=1349, avg=300.35, stdev=40.60 00:18:49.778 lat (usec): min=71, max=1589, avg=316.62, stdev=41.62 00:18:49.778 clat percentiles (usec): 00:18:49.778 | 50.000th=[ 302], 99.000th=[ 375], 99.900th=[ 578], 99.990th=[ 1123], 00:18:49.778 | 99.999th=[ 1287] 00:18:49.778 bw ( KiB/s): min=48328, max=54024, per=98.81%, avg=50998.32, stdev=1375.89, samples=19 00:18:49.778 iops : min=12082, max=13506, avg=12749.58, stdev=343.97, samples=19 00:18:49.778 lat (usec) : 20=0.01%, 50=0.01%, 
100=15.28%, 250=39.20%, 500=45.45% 00:18:49.778 lat (usec) : 750=0.05%, 1000=0.02% 00:18:49.778 lat (msec) : 2=0.01% 00:18:49.778 cpu : usr=98.92%, sys=0.42%, ctx=19, majf=0, minf=10086 00:18:49.778 IO depths : 1=7.6%, 2=19.7%, 4=55.3%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:49.778 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.778 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:49.778 issued rwts: total=123403,127414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:49.778 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:49.778 00:18:49.778 Run status group 0 (all jobs): 00:18:49.778 READ: bw=48.2MiB/s (50.5MB/s), 48.2MiB/s-48.2MiB/s (50.5MB/s-50.5MB/s), io=482MiB (505MB), run=10001-10001msec 00:18:49.778 WRITE: bw=50.4MiB/s (52.8MB/s), 50.4MiB/s-50.4MiB/s (52.8MB/s-52.8MB/s), io=498MiB (522MB), run=9875-9875msec 00:18:50.039 ----------------------------------------------------- 00:18:50.039 Suppressions used: 00:18:50.039 count bytes template 00:18:50.039 1 7 /usr/src/fio/parse.c 00:18:50.039 193 18528 /usr/src/fio/iolog.c 00:18:50.039 1 8 libtcmalloc_minimal.so 00:18:50.039 1 904 libcrypto.so 00:18:50.039 ----------------------------------------------------- 00:18:50.039 00:18:50.039 ************************************ 00:18:50.039 END TEST bdev_fio_rw_verify 00:18:50.039 ************************************ 00:18:50.039 00:18:50.039 real 0m12.668s 00:18:50.039 user 0m12.943s 00:18:50.039 sys 0m0.710s 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "fa55ef5f-71a4-488b-91f8-1ed1f867e231"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"fa55ef5f-71a4-488b-91f8-1ed1f867e231",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "fa55ef5f-71a4-488b-91f8-1ed1f867e231",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "4a3020e4-3113-4cc6-aaa5-5f598e23ffc4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "eb61339b-5f39-47aa-9ed7-bcde279d3422",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "cfbb9740-9d89-4db8-a812-db7084bf68ad",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:50.039 /home/vagrant/spdk_repo/spdk 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:18:50.039 00:18:50.039 real 0m12.971s 00:18:50.039 user 0m13.064s 00:18:50.039 sys 0m0.859s 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:50.039 10:30:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:50.039 ************************************ 00:18:50.039 END TEST bdev_fio 00:18:50.039 ************************************ 00:18:50.301 10:30:03 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:50.301 10:30:03 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:50.301 10:30:03 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:50.301 10:30:03 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:50.301 10:30:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:50.301 ************************************ 00:18:50.301 START TEST bdev_verify 00:18:50.301 ************************************ 00:18:50.301 10:30:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:50.301 [2024-11-19 10:30:03.942449] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 
00:18:50.301 [2024-11-19 10:30:03.942618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90182 ] 00:18:50.561 [2024-11-19 10:30:04.117175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:50.561 [2024-11-19 10:30:04.224855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.561 [2024-11-19 10:30:04.224894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.130 Running I/O for 5 seconds... 00:18:53.010 10731.00 IOPS, 41.92 MiB/s [2024-11-19T10:30:08.172Z] 10894.50 IOPS, 42.56 MiB/s [2024-11-19T10:30:09.112Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-19T10:30:10.066Z] 10901.50 IOPS, 42.58 MiB/s [2024-11-19T10:30:10.066Z] 10895.40 IOPS, 42.56 MiB/s 00:18:56.285 Latency(us) 00:18:56.285 [2024-11-19T10:30:10.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.285 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:56.285 Verification LBA range: start 0x0 length 0x2000 00:18:56.285 raid5f : 5.02 4416.95 17.25 0.00 0.00 43787.28 273.66 30678.86 00:18:56.285 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:56.285 Verification LBA range: start 0x2000 length 0x2000 00:18:56.285 raid5f : 5.01 6470.83 25.28 0.00 0.00 29795.97 159.19 22665.73 00:18:56.285 [2024-11-19T10:30:10.066Z] =================================================================================================================== 00:18:56.285 [2024-11-19T10:30:10.066Z] Total : 10887.78 42.53 0.00 0.00 35474.23 159.19 30678.86 00:18:57.668 00:18:57.668 real 0m7.217s 00:18:57.668 user 0m13.352s 00:18:57.668 sys 0m0.273s 00:18:57.668 ************************************ 00:18:57.668 END TEST bdev_verify 00:18:57.668 ************************************ 
00:18:57.668 10:30:11 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.668 10:30:11 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:57.668 10:30:11 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:57.668 10:30:11 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:57.668 10:30:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:57.668 10:30:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:57.668 ************************************ 00:18:57.668 START TEST bdev_verify_big_io 00:18:57.668 ************************************ 00:18:57.668 10:30:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:57.668 [2024-11-19 10:30:11.229081] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:18:57.668 [2024-11-19 10:30:11.229182] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90279 ] 00:18:57.668 [2024-11-19 10:30:11.401163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:57.928 [2024-11-19 10:30:11.503902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.928 [2024-11-19 10:30:11.503929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.497 Running I/O for 5 seconds... 
00:19:00.418 633.00 IOPS, 39.56 MiB/s [2024-11-19T10:30:15.140Z] 760.00 IOPS, 47.50 MiB/s [2024-11-19T10:30:16.080Z] 761.33 IOPS, 47.58 MiB/s [2024-11-19T10:30:17.463Z] 792.75 IOPS, 49.55 MiB/s [2024-11-19T10:30:17.463Z] 786.40 IOPS, 49.15 MiB/s 00:19:03.682 Latency(us) 00:19:03.682 [2024-11-19T10:30:17.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.682 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:03.682 Verification LBA range: start 0x0 length 0x200 00:19:03.682 raid5f : 5.34 356.25 22.27 0.00 0.00 8896342.58 218.21 388293.65 00:19:03.682 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:03.682 Verification LBA range: start 0x200 length 0x200 00:19:03.682 raid5f : 5.25 447.66 27.98 0.00 0.00 7080669.41 220.90 307704.40 00:19:03.682 [2024-11-19T10:30:17.463Z] =================================================================================================================== 00:19:03.682 [2024-11-19T10:30:17.463Z] Total : 803.91 50.24 0.00 0.00 7893472.53 218.21 388293.65 00:19:05.065 00:19:05.065 real 0m7.511s 00:19:05.065 user 0m13.954s 00:19:05.065 sys 0m0.271s 00:19:05.065 10:30:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.065 10:30:18 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:05.065 ************************************ 00:19:05.065 END TEST bdev_verify_big_io 00:19:05.065 ************************************ 00:19:05.065 10:30:18 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:05.065 10:30:18 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:05.065 10:30:18 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.065 10:30:18 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.065 ************************************ 00:19:05.065 START TEST bdev_write_zeroes 00:19:05.065 ************************************ 00:19:05.065 10:30:18 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:05.065 [2024-11-19 10:30:18.830667] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:05.065 [2024-11-19 10:30:18.830878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90373 ] 00:19:05.324 [2024-11-19 10:30:19.011736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.583 [2024-11-19 10:30:19.121445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.843 Running I/O for 1 seconds... 
00:19:07.226 30495.00 IOPS, 119.12 MiB/s 00:19:07.226 Latency(us) 00:19:07.226 [2024-11-19T10:30:21.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.226 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:07.226 raid5f : 1.01 30465.16 119.00 0.00 0.00 4190.53 1230.59 5780.90 00:19:07.226 [2024-11-19T10:30:21.007Z] =================================================================================================================== 00:19:07.226 [2024-11-19T10:30:21.007Z] Total : 30465.16 119.00 0.00 0.00 4190.53 1230.59 5780.90 00:19:08.166 00:19:08.166 real 0m3.165s 00:19:08.166 user 0m2.766s 00:19:08.166 sys 0m0.271s 00:19:08.166 10:30:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.166 10:30:21 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:08.166 ************************************ 00:19:08.166 END TEST bdev_write_zeroes 00:19:08.166 ************************************ 00:19:08.426 10:30:21 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.426 10:30:21 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:08.426 10:30:21 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.426 10:30:21 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.426 ************************************ 00:19:08.426 START TEST bdev_json_nonenclosed 00:19:08.426 ************************************ 00:19:08.426 10:30:21 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.426 [2024-11-19 
10:30:22.070390] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:08.426 [2024-11-19 10:30:22.070509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90426 ] 00:19:08.687 [2024-11-19 10:30:22.249275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.687 [2024-11-19 10:30:22.353454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.687 [2024-11-19 10:30:22.353623] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:08.687 [2024-11-19 10:30:22.353653] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:08.687 [2024-11-19 10:30:22.353663] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:08.946 00:19:08.946 real 0m0.613s 00:19:08.946 user 0m0.373s 00:19:08.946 sys 0m0.135s 00:19:08.946 10:30:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:08.946 10:30:22 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:08.946 ************************************ 00:19:08.946 END TEST bdev_json_nonenclosed 00:19:08.946 ************************************ 00:19:08.947 10:30:22 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:08.947 10:30:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:08.947 10:30:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:08.947 10:30:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:08.947 
************************************ 00:19:08.947 START TEST bdev_json_nonarray 00:19:08.947 ************************************ 00:19:08.947 10:30:22 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:09.207 [2024-11-19 10:30:22.752313] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 24.03.0 initialization... 00:19:09.207 [2024-11-19 10:30:22.752430] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90456 ] 00:19:09.207 [2024-11-19 10:30:22.932615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.466 [2024-11-19 10:30:23.037403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.466 [2024-11-19 10:30:23.037584] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:09.466 [2024-11-19 10:30:23.037605] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:09.466 [2024-11-19 10:30:23.037624] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:09.726 00:19:09.726 real 0m0.611s 00:19:09.726 user 0m0.376s 00:19:09.726 sys 0m0.130s 00:19:09.726 ************************************ 00:19:09.726 END TEST bdev_json_nonarray 00:19:09.726 ************************************ 00:19:09.726 10:30:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.726 10:30:23 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:19:09.726 10:30:23 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:19:09.726 00:19:09.726 real 0m47.041s 00:19:09.726 user 1m3.434s 00:19:09.726 sys 0m4.943s 00:19:09.726 ************************************ 00:19:09.726 END TEST blockdev_raid5f 00:19:09.726 ************************************ 00:19:09.726 10:30:23 blockdev_raid5f -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.726 10:30:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.726 10:30:23 -- spdk/autotest.sh@194 -- # uname -s 00:19:09.726 10:30:23 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:09.726 10:30:23 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:09.726 10:30:23 -- common/autotest_common.sh@10 -- # set +x 00:19:09.726 10:30:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:19:09.726 10:30:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:19:09.726 10:30:23 -- spdk/autotest.sh@385 -- # trap - 
SIGINT SIGTERM EXIT 00:19:09.726 10:30:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:19:09.726 10:30:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:09.726 10:30:23 -- common/autotest_common.sh@10 -- # set +x 00:19:09.726 10:30:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:19:09.726 10:30:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:19:09.726 10:30:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:19:09.726 10:30:23 -- common/autotest_common.sh@10 -- # set +x 00:19:12.269 INFO: APP EXITING 00:19:12.269 INFO: killing all VMs 00:19:12.269 INFO: killing vhost app 00:19:12.269 INFO: EXIT DONE 00:19:12.529 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:12.529 Waiting for block devices as requested 00:19:12.789 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:12.789 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:13.730 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:13.730 Cleaning 00:19:13.730 Removing: /var/run/dpdk/spdk0/config 00:19:13.730 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:13.730 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:13.730 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:13.730 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:13.730 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:13.730 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:13.730 Removing: /dev/shm/spdk_tgt_trace.pid56861 00:19:13.730 Removing: /var/run/dpdk/spdk0 00:19:13.730 Removing: /var/run/dpdk/spdk_pid56623 00:19:13.730 Removing: /var/run/dpdk/spdk_pid56861 00:19:13.730 Removing: /var/run/dpdk/spdk_pid57090 00:19:13.730 Removing: /var/run/dpdk/spdk_pid57194 00:19:13.730 Removing: /var/run/dpdk/spdk_pid57250 00:19:13.730 Removing: /var/run/dpdk/spdk_pid57378 00:19:13.730 Removing: 
/var/run/dpdk/spdk_pid57402 00:19:13.730 Removing: /var/run/dpdk/spdk_pid57606 00:19:13.730 Removing: /var/run/dpdk/spdk_pid57722 00:19:13.990 Removing: /var/run/dpdk/spdk_pid57825 00:19:13.990 Removing: /var/run/dpdk/spdk_pid57947 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58055 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58093 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58131 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58206 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58324 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58766 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58835 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58909 00:19:13.990 Removing: /var/run/dpdk/spdk_pid58925 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59081 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59097 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59241 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59257 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59328 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59346 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59410 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59428 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59623 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59660 00:19:13.990 Removing: /var/run/dpdk/spdk_pid59749 00:19:13.990 Removing: /var/run/dpdk/spdk_pid61071 00:19:13.990 Removing: /var/run/dpdk/spdk_pid61277 00:19:13.990 Removing: /var/run/dpdk/spdk_pid61423 00:19:13.990 Removing: /var/run/dpdk/spdk_pid62055 00:19:13.990 Removing: /var/run/dpdk/spdk_pid62267 00:19:13.990 Removing: /var/run/dpdk/spdk_pid62407 00:19:13.990 Removing: /var/run/dpdk/spdk_pid63046 00:19:13.990 Removing: /var/run/dpdk/spdk_pid63369 00:19:13.990 Removing: /var/run/dpdk/spdk_pid63515 00:19:13.990 Removing: /var/run/dpdk/spdk_pid64896 00:19:13.990 Removing: /var/run/dpdk/spdk_pid65153 00:19:13.990 Removing: /var/run/dpdk/spdk_pid65293 00:19:13.990 Removing: /var/run/dpdk/spdk_pid66674 00:19:13.990 Removing: /var/run/dpdk/spdk_pid66926 00:19:13.990 Removing: 
/var/run/dpdk/spdk_pid67069 00:19:13.990 Removing: /var/run/dpdk/spdk_pid68454 00:19:13.990 Removing: /var/run/dpdk/spdk_pid68894 00:19:13.990 Removing: /var/run/dpdk/spdk_pid69034 00:19:13.990 Removing: /var/run/dpdk/spdk_pid70514 00:19:13.990 Removing: /var/run/dpdk/spdk_pid70768 00:19:13.990 Removing: /var/run/dpdk/spdk_pid70913 00:19:13.990 Removing: /var/run/dpdk/spdk_pid72395 00:19:13.990 Removing: /var/run/dpdk/spdk_pid72650 00:19:13.991 Removing: /var/run/dpdk/spdk_pid72802 00:19:13.991 Removing: /var/run/dpdk/spdk_pid74278 00:19:13.991 Removing: /var/run/dpdk/spdk_pid74771 00:19:13.991 Removing: /var/run/dpdk/spdk_pid74917 00:19:13.991 Removing: /var/run/dpdk/spdk_pid75055 00:19:13.991 Removing: /var/run/dpdk/spdk_pid75474 00:19:13.991 Removing: /var/run/dpdk/spdk_pid76197 00:19:13.991 Removing: /var/run/dpdk/spdk_pid76588 00:19:13.991 Removing: /var/run/dpdk/spdk_pid77277 00:19:14.251 Removing: /var/run/dpdk/spdk_pid77718 00:19:14.251 Removing: /var/run/dpdk/spdk_pid78473 00:19:14.251 Removing: /var/run/dpdk/spdk_pid78878 00:19:14.251 Removing: /var/run/dpdk/spdk_pid80829 00:19:14.251 Removing: /var/run/dpdk/spdk_pid81269 00:19:14.251 Removing: /var/run/dpdk/spdk_pid81704 00:19:14.251 Removing: /var/run/dpdk/spdk_pid83786 00:19:14.251 Removing: /var/run/dpdk/spdk_pid84269 00:19:14.251 Removing: /var/run/dpdk/spdk_pid84787 00:19:14.251 Removing: /var/run/dpdk/spdk_pid85846 00:19:14.251 Removing: /var/run/dpdk/spdk_pid86173 00:19:14.251 Removing: /var/run/dpdk/spdk_pid87109 00:19:14.251 Removing: /var/run/dpdk/spdk_pid87436 00:19:14.251 Removing: /var/run/dpdk/spdk_pid88375 00:19:14.251 Removing: /var/run/dpdk/spdk_pid88698 00:19:14.251 Removing: /var/run/dpdk/spdk_pid89380 00:19:14.251 Removing: /var/run/dpdk/spdk_pid89650 00:19:14.251 Removing: /var/run/dpdk/spdk_pid89717 00:19:14.251 Removing: /var/run/dpdk/spdk_pid89759 00:19:14.251 Removing: /var/run/dpdk/spdk_pid90003 00:19:14.251 Removing: /var/run/dpdk/spdk_pid90182 00:19:14.251 Removing: 
/var/run/dpdk/spdk_pid90279 00:19:14.251 Removing: /var/run/dpdk/spdk_pid90373 00:19:14.251 Removing: /var/run/dpdk/spdk_pid90426 00:19:14.251 Removing: /var/run/dpdk/spdk_pid90456 00:19:14.251 Clean 00:19:14.251 10:30:27 -- common/autotest_common.sh@1453 -- # return 0 00:19:14.251 10:30:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:14.251 10:30:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.251 10:30:27 -- common/autotest_common.sh@10 -- # set +x 00:19:14.251 10:30:28 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:14.251 10:30:28 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:14.251 10:30:28 -- common/autotest_common.sh@10 -- # set +x 00:19:14.512 10:30:28 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:14.512 10:30:28 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:14.512 10:30:28 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:14.512 10:30:28 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:14.512 10:30:28 -- spdk/autotest.sh@398 -- # hostname 00:19:14.512 10:30:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:14.512 geninfo: WARNING: invalid characters removed from testname! 
00:19:41.117 10:30:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:41.688 10:30:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:43.599 10:30:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:45.512 10:30:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:47.419 10:31:00 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:49.330 10:31:02 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:51.240 10:31:04 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:51.240 10:31:04 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:51.240 10:31:04 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:51.240 10:31:04 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:51.240 10:31:04 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:51.240 10:31:04 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:51.240 + [[ -n 5423 ]] 00:19:51.240 + sudo kill 5423 00:19:51.251 [Pipeline] } 00:19:51.268 [Pipeline] // timeout 00:19:51.273 [Pipeline] } 00:19:51.288 [Pipeline] // stage 00:19:51.293 [Pipeline] } 00:19:51.308 [Pipeline] // catchError 00:19:51.318 [Pipeline] stage 00:19:51.320 [Pipeline] { (Stop VM) 00:19:51.333 [Pipeline] sh 00:19:51.616 + vagrant halt 00:19:54.158 ==> default: Halting domain... 00:20:02.304 [Pipeline] sh 00:20:02.589 + vagrant destroy -f 00:20:05.136 ==> default: Removing domain... 
00:20:05.149 [Pipeline] sh 00:20:05.434 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:20:05.445 [Pipeline] } 00:20:05.462 [Pipeline] // stage 00:20:05.468 [Pipeline] } 00:20:05.484 [Pipeline] // dir 00:20:05.490 [Pipeline] } 00:20:05.517 [Pipeline] // wrap 00:20:05.542 [Pipeline] } 00:20:05.577 [Pipeline] // catchError 00:20:05.584 [Pipeline] stage 00:20:05.585 [Pipeline] { (Epilogue) 00:20:05.596 [Pipeline] sh 00:20:05.877 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:10.091 [Pipeline] catchError 00:20:10.093 [Pipeline] { 00:20:10.106 [Pipeline] sh 00:20:10.393 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:10.393 Artifacts sizes are good 00:20:10.403 [Pipeline] } 00:20:10.418 [Pipeline] // catchError 00:20:10.430 [Pipeline] archiveArtifacts 00:20:10.437 Archiving artifacts 00:20:10.546 [Pipeline] cleanWs 00:20:10.559 [WS-CLEANUP] Deleting project workspace... 00:20:10.559 [WS-CLEANUP] Deferred wipeout is used... 00:20:10.566 [WS-CLEANUP] done 00:20:10.568 [Pipeline] } 00:20:10.584 [Pipeline] // stage 00:20:10.589 [Pipeline] } 00:20:10.603 [Pipeline] // node 00:20:10.608 [Pipeline] End of Pipeline 00:20:10.649 Finished: SUCCESS